Test Report: Hyperkit_macOS 19302

Commit 686e9da65a2d4195f8e8610efbc417c3b07d1722 : 2024-07-18 : 35410

Failed tests (4/345)

Order  Failed test                              Duration (s)
253    TestMultiNode/serial/RestartMultiNode    187.78
260    TestScheduledStopUnix                    81.14
309    TestNoKubernetes/serial/StartNoArgs      78.28
323    TestNetworkPlugins/group/false/Start     75.78
TestMultiNode/serial/RestartMultiNode (187.78s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-127000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E0718 21:09:16.896634    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
E0718 21:10:10.536245    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
E0718 21:11:13.840785    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
multinode_test.go:376: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-127000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : exit status 90 (3m4.220452807s)

-- stdout --
	* [multinode-127000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "multinode-127000" primary control-plane node in "multinode-127000" cluster
	* Restarting existing hyperkit VM for "multinode-127000" ...
	* Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-127000-m02" worker node in "multinode-127000" cluster
	* Restarting existing hyperkit VM for "multinode-127000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.17
	
	

-- /stdout --
** stderr ** 
	I0718 21:09:07.732715    5402 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:09:07.732921    5402 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:07.732927    5402 out.go:304] Setting ErrFile to fd 2...
	I0718 21:09:07.732930    5402 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:07.733092    5402 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1411/.minikube/bin
	I0718 21:09:07.734586    5402 out.go:298] Setting JSON to false
	I0718 21:09:07.756885    5402 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4121,"bootTime":1721358026,"procs":442,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0718 21:09:07.756981    5402 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:09:07.779533    5402 out.go:177] * [multinode-127000] minikube v1.33.1 on Darwin 14.5
	I0718 21:09:07.821838    5402 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:09:07.821863    5402 notify.go:220] Checking for updates...
	I0718 21:09:07.864576    5402 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1411/kubeconfig
	I0718 21:09:07.885883    5402 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 21:09:07.908876    5402 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:09:07.929805    5402 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1411/.minikube
	I0718 21:09:07.951053    5402 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:09:07.972764    5402 config.go:182] Loaded profile config "multinode-127000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:09:07.973450    5402 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 21:09:07.973523    5402 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 21:09:07.983201    5402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53419
	I0718 21:09:07.983736    5402 main.go:141] libmachine: () Calling .GetVersion
	I0718 21:09:07.984263    5402 main.go:141] libmachine: Using API Version  1
	I0718 21:09:07.984272    5402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 21:09:07.984589    5402 main.go:141] libmachine: () Calling .GetMachineName
	I0718 21:09:07.984776    5402 main.go:141] libmachine: (multinode-127000) Calling .DriverName
	I0718 21:09:07.984989    5402 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:09:07.985254    5402 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 21:09:07.985277    5402 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 21:09:07.993975    5402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53421
	I0718 21:09:07.994365    5402 main.go:141] libmachine: () Calling .GetVersion
	I0718 21:09:07.994783    5402 main.go:141] libmachine: Using API Version  1
	I0718 21:09:07.994824    5402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 21:09:07.995030    5402 main.go:141] libmachine: () Calling .GetMachineName
	I0718 21:09:07.995222    5402 main.go:141] libmachine: (multinode-127000) Calling .DriverName
	I0718 21:09:08.023599    5402 out.go:177] * Using the hyperkit driver based on existing profile
	I0718 21:09:08.065849    5402 start.go:297] selected driver: hyperkit
	I0718 21:09:08.065899    5402 start.go:901] validating driver "hyperkit" against &{Name:multinode-127000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.30.3 ClusterName:multinode-127000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false k
ubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: Socket
VMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:09:08.066121    5402 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:09:08.066321    5402 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:09:08.066519    5402 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19302-1411/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0718 21:09:08.075964    5402 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0718 21:09:08.080379    5402 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 21:09:08.080402    5402 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0718 21:09:08.083236    5402 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:09:08.083295    5402 cni.go:84] Creating CNI manager for ""
	I0718 21:09:08.083305    5402 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0718 21:09:08.083377    5402 start.go:340] cluster config:
	{Name:multinode-127000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-127000 Namespace:default APIServ
erHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plu
gin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:09:08.083485    5402 iso.go:125] acquiring lock: {Name:mka3a56e9fb30ac1fad44235cb5c998fd919cd8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:09:08.127919    5402 out.go:177] * Starting "multinode-127000" primary control-plane node in "multinode-127000" cluster
	I0718 21:09:08.149869    5402 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:09:08.149938    5402 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0718 21:09:08.149965    5402 cache.go:56] Caching tarball of preloaded images
	I0718 21:09:08.150175    5402 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0718 21:09:08.150197    5402 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:09:08.150374    5402 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/config.json ...
	I0718 21:09:08.151214    5402 start.go:360] acquireMachinesLock for multinode-127000: {Name:mk8a0ac4b11cd5d9eba5ac8b9ae33317742f9112 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:09:08.151330    5402 start.go:364] duration metric: took 93.269µs to acquireMachinesLock for "multinode-127000"
	I0718 21:09:08.151385    5402 start.go:96] Skipping create...Using existing machine configuration
	I0718 21:09:08.151406    5402 fix.go:54] fixHost starting: 
	I0718 21:09:08.151801    5402 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 21:09:08.151863    5402 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 21:09:08.161189    5402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53423
	I0718 21:09:08.161603    5402 main.go:141] libmachine: () Calling .GetVersion
	I0718 21:09:08.161963    5402 main.go:141] libmachine: Using API Version  1
	I0718 21:09:08.161979    5402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 21:09:08.162222    5402 main.go:141] libmachine: () Calling .GetMachineName
	I0718 21:09:08.162354    5402 main.go:141] libmachine: (multinode-127000) Calling .DriverName
	I0718 21:09:08.162457    5402 main.go:141] libmachine: (multinode-127000) Calling .GetState
	I0718 21:09:08.162545    5402 main.go:141] libmachine: (multinode-127000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 21:09:08.162617    5402 main.go:141] libmachine: (multinode-127000) DBG | hyperkit pid from json: 5329
	I0718 21:09:08.163559    5402 main.go:141] libmachine: (multinode-127000) DBG | hyperkit pid 5329 missing from process table
	I0718 21:09:08.163604    5402 fix.go:112] recreateIfNeeded on multinode-127000: state=Stopped err=<nil>
	I0718 21:09:08.163621    5402 main.go:141] libmachine: (multinode-127000) Calling .DriverName
	W0718 21:09:08.163709    5402 fix.go:138] unexpected machine state, will restart: <nil>
	I0718 21:09:08.205879    5402 out.go:177] * Restarting existing hyperkit VM for "multinode-127000" ...
	I0718 21:09:08.228986    5402 main.go:141] libmachine: (multinode-127000) Calling .Start
	I0718 21:09:08.229275    5402 main.go:141] libmachine: (multinode-127000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 21:09:08.229342    5402 main.go:141] libmachine: (multinode-127000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/hyperkit.pid
	I0718 21:09:08.231130    5402 main.go:141] libmachine: (multinode-127000) DBG | hyperkit pid 5329 missing from process table
	I0718 21:09:08.231149    5402 main.go:141] libmachine: (multinode-127000) DBG | pid 5329 is in state "Stopped"
	I0718 21:09:08.231169    5402 main.go:141] libmachine: (multinode-127000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/hyperkit.pid...
	I0718 21:09:08.231357    5402 main.go:141] libmachine: (multinode-127000) DBG | Using UUID 2170d403-7108-4d79-a7e1-5094631d4682
	I0718 21:09:08.344896    5402 main.go:141] libmachine: (multinode-127000) DBG | Generated MAC d2:e2:11:67:74:1c
	I0718 21:09:08.344923    5402 main.go:141] libmachine: (multinode-127000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-127000
	I0718 21:09:08.345052    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2170d403-7108-4d79-a7e1-5094631d4682", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bcc60)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I0718 21:09:08.345084    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2170d403-7108-4d79-a7e1-5094631d4682", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bcc60)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Proce
ss)(nil)}
	I0718 21:09:08.345131    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2170d403-7108-4d79-a7e1-5094631d4682", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/multinode-127000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/tty,log=/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/bzimage,/Users/jenkins/minikube-integration/1930
2-1411/.minikube/machines/multinode-127000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-127000"}
	I0718 21:09:08.345176    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2170d403-7108-4d79-a7e1-5094631d4682 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/multinode-127000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/tty,log=/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/console-ring -f kexec,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/bzimage,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/initrd,earlyprintk=
serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-127000"
	I0718 21:09:08.345194    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0718 21:09:08.346583    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 DEBUG: hyperkit: Pid is 5415
	I0718 21:09:08.346921    5402 main.go:141] libmachine: (multinode-127000) DBG | Attempt 0
	I0718 21:09:08.346933    5402 main.go:141] libmachine: (multinode-127000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 21:09:08.346983    5402 main.go:141] libmachine: (multinode-127000) DBG | hyperkit pid from json: 5415
	I0718 21:09:08.348667    5402 main.go:141] libmachine: (multinode-127000) DBG | Searching for d2:e2:11:67:74:1c in /var/db/dhcpd_leases ...
	I0718 21:09:08.348752    5402 main.go:141] libmachine: (multinode-127000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0718 21:09:08.348779    5402 main.go:141] libmachine: (multinode-127000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:4c:de:4f:d8:27 ID:1,6:4c:de:4f:d8:27 Lease:0x6699e6d1}
	I0718 21:09:08.348790    5402 main.go:141] libmachine: (multinode-127000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:d2:59:42:45:2c ID:1,3a:d2:59:42:45:2c Lease:0x669b37f6}
	I0718 21:09:08.348820    5402 main.go:141] libmachine: (multinode-127000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:d2:e2:11:67:74:1c ID:1,d2:e2:11:67:74:1c Lease:0x669b37bb}
	I0718 21:09:08.348836    5402 main.go:141] libmachine: (multinode-127000) DBG | Found match: d2:e2:11:67:74:1c
	I0718 21:09:08.348849    5402 main.go:141] libmachine: (multinode-127000) DBG | IP: 192.169.0.17
	I0718 21:09:08.348880    5402 main.go:141] libmachine: (multinode-127000) Calling .GetConfigRaw
	I0718 21:09:08.349504    5402 main.go:141] libmachine: (multinode-127000) Calling .GetIP
	I0718 21:09:08.349706    5402 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/config.json ...
	I0718 21:09:08.350127    5402 machine.go:94] provisionDockerMachine start ...
	I0718 21:09:08.350136    5402 main.go:141] libmachine: (multinode-127000) Calling .DriverName
	I0718 21:09:08.350259    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHHostname
	I0718 21:09:08.350365    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHPort
	I0718 21:09:08.350483    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:08.350628    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:08.350765    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHUsername
	I0718 21:09:08.350926    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:09:08.351138    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0718 21:09:08.351147    5402 main.go:141] libmachine: About to run SSH command:
	hostname
	I0718 21:09:08.355168    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0718 21:09:08.408798    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0718 21:09:08.409496    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0718 21:09:08.409516    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0718 21:09:08.409528    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0718 21:09:08.409535    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0718 21:09:08.789011    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0718 21:09:08.789027    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0718 21:09:08.903629    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0718 21:09:08.903646    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0718 21:09:08.903672    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0718 21:09:08.903690    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0718 21:09:08.904533    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0718 21:09:08.904545    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0718 21:09:14.174877    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:14 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0718 21:09:14.175009    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:14 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0718 21:09:14.175020    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:14 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0718 21:09:14.198773    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:14 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0718 21:09:43.418102    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0718 21:09:43.418117    5402 main.go:141] libmachine: (multinode-127000) Calling .GetMachineName
	I0718 21:09:43.418258    5402 buildroot.go:166] provisioning hostname "multinode-127000"
	I0718 21:09:43.418270    5402 main.go:141] libmachine: (multinode-127000) Calling .GetMachineName
	I0718 21:09:43.418369    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHHostname
	I0718 21:09:43.418470    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHPort
	I0718 21:09:43.418556    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:43.418655    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:43.418767    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHUsername
	I0718 21:09:43.418894    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:09:43.419109    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0718 21:09:43.419118    5402 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-127000 && echo "multinode-127000" | sudo tee /etc/hostname
	I0718 21:09:43.482187    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-127000
	
	I0718 21:09:43.482205    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHHostname
	I0718 21:09:43.482350    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHPort
	I0718 21:09:43.482456    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:43.482541    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:43.482641    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHUsername
	I0718 21:09:43.482771    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:09:43.482924    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0718 21:09:43.482937    5402 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-127000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-127000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-127000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 21:09:43.540730    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 21:09:43.540752    5402 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1411/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1411/.minikube}
	I0718 21:09:43.540774    5402 buildroot.go:174] setting up certificates
	I0718 21:09:43.540782    5402 provision.go:84] configureAuth start
	I0718 21:09:43.540790    5402 main.go:141] libmachine: (multinode-127000) Calling .GetMachineName
	I0718 21:09:43.540933    5402 main.go:141] libmachine: (multinode-127000) Calling .GetIP
	I0718 21:09:43.541030    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHHostname
	I0718 21:09:43.541125    5402 provision.go:143] copyHostCerts
	I0718 21:09:43.541157    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.pem
	I0718 21:09:43.541228    5402 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.pem, removing ...
	I0718 21:09:43.541237    5402 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.pem
	I0718 21:09:43.541397    5402 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.pem (1082 bytes)
	I0718 21:09:43.541623    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19302-1411/.minikube/cert.pem
	I0718 21:09:43.541669    5402 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1411/.minikube/cert.pem, removing ...
	I0718 21:09:43.541674    5402 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1411/.minikube/cert.pem
	I0718 21:09:43.541839    5402 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1411/.minikube/cert.pem (1123 bytes)
	I0718 21:09:43.542026    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19302-1411/.minikube/key.pem
	I0718 21:09:43.542071    5402 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1411/.minikube/key.pem, removing ...
	I0718 21:09:43.542077    5402 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1411/.minikube/key.pem
	I0718 21:09:43.542168    5402 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1411/.minikube/key.pem (1675 bytes)
	I0718 21:09:43.542315    5402 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca-key.pem org=jenkins.multinode-127000 san=[127.0.0.1 192.169.0.17 localhost minikube multinode-127000]
	I0718 21:09:43.620132    5402 provision.go:177] copyRemoteCerts
	I0718 21:09:43.620181    5402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 21:09:43.620198    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHHostname
	I0718 21:09:43.620323    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHPort
	I0718 21:09:43.620415    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:43.620506    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHUsername
	I0718 21:09:43.620604    5402 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/id_rsa Username:docker}
	I0718 21:09:43.654508    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0718 21:09:43.654583    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 21:09:43.674626    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0718 21:09:43.674687    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0718 21:09:43.694129    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0718 21:09:43.694187    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0718 21:09:43.713871    5402 provision.go:87] duration metric: took 173.061203ms to configureAuth
	I0718 21:09:43.713883    5402 buildroot.go:189] setting minikube options for container-runtime
	I0718 21:09:43.714045    5402 config.go:182] Loaded profile config "multinode-127000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:09:43.714059    5402 main.go:141] libmachine: (multinode-127000) Calling .DriverName
	I0718 21:09:43.714199    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHHostname
	I0718 21:09:43.714288    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHPort
	I0718 21:09:43.714371    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:43.714466    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:43.714549    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHUsername
	I0718 21:09:43.714663    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:09:43.714814    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0718 21:09:43.714822    5402 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 21:09:43.767743    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0718 21:09:43.767754    5402 buildroot.go:70] root file system type: tmpfs
	I0718 21:09:43.767831    5402 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 21:09:43.767846    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHHostname
	I0718 21:09:43.767976    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHPort
	I0718 21:09:43.768060    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:43.768152    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:43.768246    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHUsername
	I0718 21:09:43.768391    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:09:43.768536    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0718 21:09:43.768581    5402 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 21:09:43.833695    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 21:09:43.833722    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHHostname
	I0718 21:09:43.833864    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHPort
	I0718 21:09:43.833950    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:43.834031    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:43.834130    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHUsername
	I0718 21:09:43.834249    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:09:43.834390    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0718 21:09:43.834403    5402 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 21:09:45.516084    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0718 21:09:45.516103    5402 machine.go:97] duration metric: took 37.164865517s to provisionDockerMachine
	I0718 21:09:45.516116    5402 start.go:293] postStartSetup for "multinode-127000" (driver="hyperkit")
	I0718 21:09:45.516123    5402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 21:09:45.516135    5402 main.go:141] libmachine: (multinode-127000) Calling .DriverName
	I0718 21:09:45.516307    5402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 21:09:45.516321    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHHostname
	I0718 21:09:45.516419    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHPort
	I0718 21:09:45.516513    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:45.516597    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHUsername
	I0718 21:09:45.516676    5402 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/id_rsa Username:docker}
	I0718 21:09:45.550620    5402 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 21:09:45.553761    5402 command_runner.go:130] > NAME=Buildroot
	I0718 21:09:45.553770    5402 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0718 21:09:45.553774    5402 command_runner.go:130] > ID=buildroot
	I0718 21:09:45.553777    5402 command_runner.go:130] > VERSION_ID=2023.02.9
	I0718 21:09:45.553781    5402 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0718 21:09:45.553876    5402 info.go:137] Remote host: Buildroot 2023.02.9
	I0718 21:09:45.553886    5402 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1411/.minikube/addons for local assets ...
	I0718 21:09:45.553983    5402 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1411/.minikube/files for local assets ...
	I0718 21:09:45.554167    5402 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1411/.minikube/files/etc/ssl/certs/19482.pem -> 19482.pem in /etc/ssl/certs
	I0718 21:09:45.554173    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/files/etc/ssl/certs/19482.pem -> /etc/ssl/certs/19482.pem
	I0718 21:09:45.554390    5402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 21:09:45.561542    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/files/etc/ssl/certs/19482.pem --> /etc/ssl/certs/19482.pem (1708 bytes)
	I0718 21:09:45.581669    5402 start.go:296] duration metric: took 65.543633ms for postStartSetup
	I0718 21:09:45.581690    5402 fix.go:56] duration metric: took 37.429182854s for fixHost
	I0718 21:09:45.581703    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHHostname
	I0718 21:09:45.581843    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHPort
	I0718 21:09:45.581945    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:45.582055    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:45.582146    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHUsername
	I0718 21:09:45.582254    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:09:45.582386    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0718 21:09:45.582393    5402 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0718 21:09:45.632928    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721362185.710404724
	
	I0718 21:09:45.632949    5402 fix.go:216] guest clock: 1721362185.710404724
	I0718 21:09:45.632955    5402 fix.go:229] Guest: 2024-07-18 21:09:45.710404724 -0700 PDT Remote: 2024-07-18 21:09:45.581693 -0700 PDT m=+37.883450204 (delta=128.711724ms)
	I0718 21:09:45.632975    5402 fix.go:200] guest clock delta is within tolerance: 128.711724ms
	I0718 21:09:45.632978    5402 start.go:83] releasing machines lock for "multinode-127000", held for 37.480526357s
	I0718 21:09:45.632998    5402 main.go:141] libmachine: (multinode-127000) Calling .DriverName
	I0718 21:09:45.633133    5402 main.go:141] libmachine: (multinode-127000) Calling .GetIP
	I0718 21:09:45.633238    5402 main.go:141] libmachine: (multinode-127000) Calling .DriverName
	I0718 21:09:45.633575    5402 main.go:141] libmachine: (multinode-127000) Calling .DriverName
	I0718 21:09:45.633677    5402 main.go:141] libmachine: (multinode-127000) Calling .DriverName
	I0718 21:09:45.633764    5402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 21:09:45.633803    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHHostname
	I0718 21:09:45.633838    5402 ssh_runner.go:195] Run: cat /version.json
	I0718 21:09:45.633849    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHHostname
	I0718 21:09:45.633914    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHPort
	I0718 21:09:45.633939    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHPort
	I0718 21:09:45.634011    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:45.634039    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:45.634109    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHUsername
	I0718 21:09:45.634134    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHUsername
	I0718 21:09:45.634203    5402 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/id_rsa Username:docker}
	I0718 21:09:45.634230    5402 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/id_rsa Username:docker}
	I0718 21:09:45.709482    5402 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0718 21:09:45.710315    5402 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0718 21:09:45.710510    5402 ssh_runner.go:195] Run: systemctl --version
	I0718 21:09:45.715584    5402 command_runner.go:130] > systemd 252 (252)
	I0718 21:09:45.715600    5402 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0718 21:09:45.715795    5402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0718 21:09:45.720069    5402 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0718 21:09:45.720100    5402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0718 21:09:45.720136    5402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 21:09:45.732422    5402 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0718 21:09:45.732459    5402 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 21:09:45.732466    5402 start.go:495] detecting cgroup driver to use...
	I0718 21:09:45.732558    5402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 21:09:45.747185    5402 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0718 21:09:45.747458    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0718 21:09:45.756153    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0718 21:09:45.764901    5402 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0718 21:09:45.764942    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0718 21:09:45.773526    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 21:09:45.782247    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0718 21:09:45.790817    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 21:09:45.799374    5402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 21:09:45.808402    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0718 21:09:45.817051    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0718 21:09:45.825770    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0718 21:09:45.834349    5402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 21:09:45.842207    5402 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0718 21:09:45.842377    5402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 21:09:45.850300    5402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:09:45.950029    5402 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0718 21:09:45.969523    5402 start.go:495] detecting cgroup driver to use...
	I0718 21:09:45.969602    5402 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 21:09:45.987662    5402 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0718 21:09:45.988224    5402 command_runner.go:130] > [Unit]
	I0718 21:09:45.988244    5402 command_runner.go:130] > Description=Docker Application Container Engine
	I0718 21:09:45.988264    5402 command_runner.go:130] > Documentation=https://docs.docker.com
	I0718 21:09:45.988275    5402 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0718 21:09:45.988280    5402 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0718 21:09:45.988284    5402 command_runner.go:130] > StartLimitBurst=3
	I0718 21:09:45.988288    5402 command_runner.go:130] > StartLimitIntervalSec=60
	I0718 21:09:45.988291    5402 command_runner.go:130] > [Service]
	I0718 21:09:45.988295    5402 command_runner.go:130] > Type=notify
	I0718 21:09:45.988298    5402 command_runner.go:130] > Restart=on-failure
	I0718 21:09:45.988305    5402 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0718 21:09:45.988319    5402 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0718 21:09:45.988326    5402 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0718 21:09:45.988331    5402 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0718 21:09:45.988337    5402 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0718 21:09:45.988345    5402 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0718 21:09:45.988352    5402 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0718 21:09:45.988360    5402 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0718 21:09:45.988366    5402 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0718 21:09:45.988372    5402 command_runner.go:130] > ExecStart=
	I0718 21:09:45.988386    5402 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0718 21:09:45.988390    5402 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0718 21:09:45.988397    5402 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0718 21:09:45.988403    5402 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0718 21:09:45.988406    5402 command_runner.go:130] > LimitNOFILE=infinity
	I0718 21:09:45.988411    5402 command_runner.go:130] > LimitNPROC=infinity
	I0718 21:09:45.988425    5402 command_runner.go:130] > LimitCORE=infinity
	I0718 21:09:45.988433    5402 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0718 21:09:45.988437    5402 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0718 21:09:45.988441    5402 command_runner.go:130] > TasksMax=infinity
	I0718 21:09:45.988445    5402 command_runner.go:130] > TimeoutStartSec=0
	I0718 21:09:45.988450    5402 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0718 21:09:45.988454    5402 command_runner.go:130] > Delegate=yes
	I0718 21:09:45.988459    5402 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0718 21:09:45.988464    5402 command_runner.go:130] > KillMode=process
	I0718 21:09:45.988467    5402 command_runner.go:130] > [Install]
	I0718 21:09:45.988481    5402 command_runner.go:130] > WantedBy=multi-user.target
	I0718 21:09:45.988559    5402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 21:09:46.000036    5402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 21:09:46.013593    5402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 21:09:46.024209    5402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 21:09:46.034558    5402 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0718 21:09:46.056896    5402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 21:09:46.067341    5402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 21:09:46.082374    5402 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0718 21:09:46.082726    5402 ssh_runner.go:195] Run: which cri-dockerd
	I0718 21:09:46.085565    5402 command_runner.go:130] > /usr/bin/cri-dockerd
	I0718 21:09:46.085708    5402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0718 21:09:46.093034    5402 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0718 21:09:46.106748    5402 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 21:09:46.201556    5402 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 21:09:46.317115    5402 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0718 21:09:46.317180    5402 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0718 21:09:46.332170    5402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:09:46.431120    5402 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 21:09:48.791524    5402 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.360305433s)
	I0718 21:09:48.791584    5402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0718 21:09:48.801762    5402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 21:09:48.811580    5402 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0718 21:09:48.904862    5402 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0718 21:09:49.019139    5402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:09:49.131102    5402 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0718 21:09:49.144595    5402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 21:09:49.155003    5402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:09:49.248782    5402 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0718 21:09:49.313017    5402 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0718 21:09:49.313094    5402 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0718 21:09:49.317280    5402 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0718 21:09:49.317291    5402 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0718 21:09:49.317296    5402 command_runner.go:130] > Device: 0,22	Inode: 760         Links: 1
	I0718 21:09:49.317301    5402 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0718 21:09:49.317305    5402 command_runner.go:130] > Access: 2024-07-19 04:09:49.339736965 +0000
	I0718 21:09:49.317309    5402 command_runner.go:130] > Modify: 2024-07-19 04:09:49.339736965 +0000
	I0718 21:09:49.317313    5402 command_runner.go:130] > Change: 2024-07-19 04:09:49.341736965 +0000
	I0718 21:09:49.317317    5402 command_runner.go:130] >  Birth: -
	I0718 21:09:49.317499    5402 start.go:563] Will wait 60s for crictl version
	I0718 21:09:49.317556    5402 ssh_runner.go:195] Run: which crictl
	I0718 21:09:49.320486    5402 command_runner.go:130] > /usr/bin/crictl
	I0718 21:09:49.320654    5402 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0718 21:09:49.347451    5402 command_runner.go:130] > Version:  0.1.0
	I0718 21:09:49.347463    5402 command_runner.go:130] > RuntimeName:  docker
	I0718 21:09:49.347467    5402 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0718 21:09:49.347471    5402 command_runner.go:130] > RuntimeApiVersion:  v1
	I0718 21:09:49.348520    5402 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0718 21:09:49.348590    5402 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 21:09:49.365113    5402 command_runner.go:130] > 27.0.3
	I0718 21:09:49.366052    5402 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 21:09:49.383647    5402 command_runner.go:130] > 27.0.3
	I0718 21:09:49.426196    5402 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0718 21:09:49.426245    5402 main.go:141] libmachine: (multinode-127000) Calling .GetIP
	I0718 21:09:49.426631    5402 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0718 21:09:49.431146    5402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
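	The `/etc/hosts` rewrite logged above uses a common idempotent pattern: filter out any existing line for the hostname, append the fresh mapping, then copy the result back into place (via `sudo cp`, since a non-root shell cannot redirect into `/etc/hosts` directly). A minimal sketch of the same pattern against a scratch file — the path `/tmp/hosts.demo` and the name `demo.internal` are illustrative, not from minikube; assumes bash for the `$'...'` tab escape:

```shell
# Idempotently pin a hostname to an IP in a hosts-style file:
# drop any line ending in the target name, then append the new mapping.
HOSTS=/tmp/hosts.demo
printf '127.0.0.1\tlocalhost\n10.0.0.5\tdemo.internal\n' > "$HOSTS"

{ grep -v $'\tdemo.internal$' "$HOSTS"; printf '10.0.0.9\tdemo.internal\n'; } > "$HOSTS.new"
mv "$HOSTS.new" "$HOSTS"
```

	Running the block again leaves exactly one `demo.internal` entry, which is why minikube can apply it on every start without checking first.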
	I0718 21:09:49.440685    5402 kubeadm.go:883] updating cluster {Name:multinode-127000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-127000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0718 21:09:49.440771    5402 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:09:49.440834    5402 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0718 21:09:49.452806    5402 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0718 21:09:49.452819    5402 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0718 21:09:49.452824    5402 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0718 21:09:49.452840    5402 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0718 21:09:49.452845    5402 command_runner.go:130] > kindest/kindnetd:v20240715-585640e9
	I0718 21:09:49.452849    5402 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0718 21:09:49.452856    5402 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0718 21:09:49.452860    5402 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0718 21:09:49.452864    5402 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 21:09:49.452869    5402 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0718 21:09:49.453467    5402 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0718 21:09:49.453475    5402 docker.go:615] Images already preloaded, skipping extraction
	I0718 21:09:49.453541    5402 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0718 21:09:49.466729    5402 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0718 21:09:49.466741    5402 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0718 21:09:49.466746    5402 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0718 21:09:49.466750    5402 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0718 21:09:49.466754    5402 command_runner.go:130] > kindest/kindnetd:v20240715-585640e9
	I0718 21:09:49.466757    5402 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0718 21:09:49.466762    5402 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0718 21:09:49.466766    5402 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0718 21:09:49.466770    5402 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 21:09:49.466774    5402 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0718 21:09:49.466808    5402 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0718 21:09:49.466823    5402 cache_images.go:84] Images are preloaded, skipping loading
	I0718 21:09:49.466833    5402 kubeadm.go:934] updating node { 192.169.0.17 8443 v1.30.3 docker true true} ...
	I0718 21:09:49.466921    5402 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-127000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-127000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0718 21:09:49.466989    5402 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0718 21:09:49.486123    5402 command_runner.go:130] > cgroupfs
	I0718 21:09:49.487120    5402 cni.go:84] Creating CNI manager for ""
	I0718 21:09:49.487129    5402 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0718 21:09:49.487140    5402 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0718 21:09:49.487156    5402 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.17 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-127000 NodeName:multinode-127000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0718 21:09:49.487241    5402 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.17
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-127000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.17
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.17"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0718 21:09:49.487301    5402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0718 21:09:49.494732    5402 command_runner.go:130] > kubeadm
	I0718 21:09:49.494740    5402 command_runner.go:130] > kubectl
	I0718 21:09:49.494744    5402 command_runner.go:130] > kubelet
	I0718 21:09:49.494799    5402 binaries.go:44] Found k8s binaries, skipping transfer
	I0718 21:09:49.494840    5402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0718 21:09:49.502358    5402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0718 21:09:49.516286    5402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0718 21:09:49.529475    5402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0718 21:09:49.543040    5402 ssh_runner.go:195] Run: grep 192.169.0.17	control-plane.minikube.internal$ /etc/hosts
	I0718 21:09:49.545881    5402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.17	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 21:09:49.554943    5402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:09:49.644775    5402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 21:09:49.660239    5402 certs.go:68] Setting up /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000 for IP: 192.169.0.17
	I0718 21:09:49.660251    5402 certs.go:194] generating shared ca certs ...
	I0718 21:09:49.660273    5402 certs.go:226] acquiring lock for ca certs: {Name:mka1585510108908e8b36055df3736f0521555f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:09:49.660467    5402 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.key
	I0718 21:09:49.660547    5402 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19302-1411/.minikube/proxy-client-ca.key
	I0718 21:09:49.660557    5402 certs.go:256] generating profile certs ...
	I0718 21:09:49.660678    5402 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/client.key
	I0718 21:09:49.660759    5402 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/apiserver.key.b7156be1
	I0718 21:09:49.660831    5402 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/proxy-client.key
	I0718 21:09:49.660838    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0718 21:09:49.660859    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0718 21:09:49.660877    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0718 21:09:49.660898    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0718 21:09:49.660916    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0718 21:09:49.660945    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0718 21:09:49.660976    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0718 21:09:49.660995    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0718 21:09:49.661097    5402 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/1948.pem (1338 bytes)
	W0718 21:09:49.661144    5402 certs.go:480] ignoring /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/1948_empty.pem, impossibly tiny 0 bytes
	I0718 21:09:49.661153    5402 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca-key.pem (1679 bytes)
	I0718 21:09:49.661188    5402 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem (1082 bytes)
	I0718 21:09:49.661223    5402 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/cert.pem (1123 bytes)
	I0718 21:09:49.661254    5402 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/key.pem (1675 bytes)
	I0718 21:09:49.661322    5402 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1411/.minikube/files/etc/ssl/certs/19482.pem (1708 bytes)
	I0718 21:09:49.661354    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/files/etc/ssl/certs/19482.pem -> /usr/share/ca-certificates/19482.pem
	I0718 21:09:49.661375    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0718 21:09:49.661393    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/1948.pem -> /usr/share/ca-certificates/1948.pem
	I0718 21:09:49.661859    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0718 21:09:49.696993    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0718 21:09:49.720296    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0718 21:09:49.748941    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0718 21:09:49.772048    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0718 21:09:49.791992    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0718 21:09:49.811717    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0718 21:09:49.831177    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0718 21:09:49.850401    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/files/etc/ssl/certs/19482.pem --> /usr/share/ca-certificates/19482.pem (1708 bytes)
	I0718 21:09:49.869625    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0718 21:09:49.888390    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/1948.pem --> /usr/share/ca-certificates/1948.pem (1338 bytes)
	I0718 21:09:49.907734    5402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0718 21:09:49.921233    5402 ssh_runner.go:195] Run: openssl version
	I0718 21:09:49.925446    5402 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0718 21:09:49.925636    5402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0718 21:09:49.934756    5402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0718 21:09:49.938109    5402 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 19 03:28 /usr/share/ca-certificates/minikubeCA.pem
	I0718 21:09:49.938268    5402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:28 /usr/share/ca-certificates/minikubeCA.pem
	I0718 21:09:49.938317    5402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0718 21:09:49.942407    5402 command_runner.go:130] > b5213941
	I0718 21:09:49.942629    5402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0718 21:09:49.951988    5402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1948.pem && ln -fs /usr/share/ca-certificates/1948.pem /etc/ssl/certs/1948.pem"
	I0718 21:09:49.961221    5402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1948.pem
	I0718 21:09:49.964542    5402 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 19 03:36 /usr/share/ca-certificates/1948.pem
	I0718 21:09:49.964640    5402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:36 /usr/share/ca-certificates/1948.pem
	I0718 21:09:49.964680    5402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1948.pem
	I0718 21:09:49.968968    5402 command_runner.go:130] > 51391683
	I0718 21:09:49.969144    5402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1948.pem /etc/ssl/certs/51391683.0"
	I0718 21:09:49.978200    5402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19482.pem && ln -fs /usr/share/ca-certificates/19482.pem /etc/ssl/certs/19482.pem"
	I0718 21:09:49.987317    5402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19482.pem
	I0718 21:09:49.990507    5402 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 19 03:36 /usr/share/ca-certificates/19482.pem
	I0718 21:09:49.990626    5402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:36 /usr/share/ca-certificates/19482.pem
	I0718 21:09:49.990659    5402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19482.pem
	I0718 21:09:49.994688    5402 command_runner.go:130] > 3ec20f2e
	I0718 21:09:49.994832    5402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19482.pem /etc/ssl/certs/3ec20f2e.0"
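	The `openssl x509 -hash` calls above compute the subject-name hash that OpenSSL uses to look up CA certificates in a trust directory: each CA is reachable through a symlink named `<hash>.0`, which is exactly what the `ln -fs ... /etc/ssl/certs/<hash>.0` commands create. A sketch of the same technique with a throwaway self-signed cert — all paths and the `demoCA` name are illustrative:

```shell
# Compute the subject hash OpenSSL uses to name trust-store symlinks
# (e.g. /etc/ssl/certs/b5213941.0), using a scratch directory instead.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=demoCA" \
  -keyout /tmp/demo-ca.key -out /tmp/demo-ca.pem -days 1 2>/dev/null

HASH=$(openssl x509 -hash -noout -in /tmp/demo-ca.pem)   # 8 hex chars
mkdir -p /tmp/certs-demo
ln -fs /tmp/demo-ca.pem "/tmp/certs-demo/${HASH}.0"
```

	Tools that call `SSL_CTX_load_verify_locations` with a CA *directory* resolve issuers through these hash-named links, which is why minikube installs its CAs this way rather than concatenating a bundle.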
	I0718 21:09:50.003995    5402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0718 21:09:50.007252    5402 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0718 21:09:50.007264    5402 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0718 21:09:50.007271    5402 command_runner.go:130] > Device: 253,1	Inode: 6290248     Links: 1
	I0718 21:09:50.007279    5402 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0718 21:09:50.007287    5402 command_runner.go:130] > Access: 2024-07-19 04:06:29.123168803 +0000
	I0718 21:09:50.007294    5402 command_runner.go:130] > Modify: 2024-07-19 04:02:41.815943936 +0000
	I0718 21:09:50.007302    5402 command_runner.go:130] > Change: 2024-07-19 04:02:41.815943936 +0000
	I0718 21:09:50.007307    5402 command_runner.go:130] >  Birth: 2024-07-19 04:02:41.815943936 +0000
	I0718 21:09:50.007412    5402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0718 21:09:50.011504    5402 command_runner.go:130] > Certificate will not expire
	I0718 21:09:50.011687    5402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0718 21:09:50.015766    5402 command_runner.go:130] > Certificate will not expire
	I0718 21:09:50.015962    5402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0718 21:09:50.020143    5402 command_runner.go:130] > Certificate will not expire
	I0718 21:09:50.020296    5402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0718 21:09:50.024353    5402 command_runner.go:130] > Certificate will not expire
	I0718 21:09:50.024533    5402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0718 21:09:50.028589    5402 command_runner.go:130] > Certificate will not expire
	I0718 21:09:50.028732    5402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0718 21:09:50.032988    5402 command_runner.go:130] > Certificate will not expire
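	The `-checkend 86400` probes above ask whether each certificate will still be valid 86400 seconds (24 hours) from now: exit status 0 with "Certificate will not expire" means it survives the window, so minikube can skip regeneration. A minimal sketch with a throwaway 2-day cert (file names are illustrative):

```shell
# -checkend N exits 0 ("Certificate will not expire") if the cert is
# still valid N seconds from now, 1 ("Certificate will expire") otherwise.
openssl req -x509 -newkey rsa:2048 -nodes -subj "/CN=checkend-demo" \
  -keyout /tmp/ce.key -out /tmp/ce.pem -days 2 2>/dev/null

openssl x509 -noout -in /tmp/ce.pem -checkend 86400            # 1-day window: passes
openssl x509 -noout -in /tmp/ce.pem -checkend 259200 \
  || echo "expires within 3 days"                              # 3-day window: fails
```

	Branching on the exit code rather than parsing dates is what makes this check robust across locales and OpenSSL versions.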
	I0718 21:09:50.033032    5402 kubeadm.go:392] StartCluster: {Name:multinode-127000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-127000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:09:50.033135    5402 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0718 21:09:50.046911    5402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0718 21:09:50.055273    5402 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0718 21:09:50.055282    5402 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0718 21:09:50.055286    5402 command_runner.go:130] > /var/lib/minikube/etcd:
	I0718 21:09:50.055289    5402 command_runner.go:130] > member
	I0718 21:09:50.055366    5402 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0718 21:09:50.055377    5402 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0718 21:09:50.055418    5402 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0718 21:09:50.063600    5402 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0718 21:09:50.063910    5402 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-127000" does not appear in /Users/jenkins/minikube-integration/19302-1411/kubeconfig
	I0718 21:09:50.063998    5402 kubeconfig.go:62] /Users/jenkins/minikube-integration/19302-1411/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-127000" cluster setting kubeconfig missing "multinode-127000" context setting]
	I0718 21:09:50.064200    5402 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1411/kubeconfig: {Name:mk98b5ca4921c9b1e25bd07d5b44b266493ad1f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:09:50.064773    5402 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19302-1411/kubeconfig
	I0718 21:09:50.064994    5402 kapi.go:59] client config for multinode-127000: &rest.Config{Host:"https://192.169.0.17:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-1411/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x476bba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0718 21:09:50.065324    5402 cert_rotation.go:137] Starting client certificate rotation controller
	I0718 21:09:50.065495    5402 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0718 21:09:50.073560    5402 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.17
	I0718 21:09:50.073575    5402 kubeadm.go:1160] stopping kube-system containers ...
	I0718 21:09:50.073644    5402 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0718 21:09:50.088335    5402 command_runner.go:130] > 6396364b3e0e
	I0718 21:09:50.088346    5402 command_runner.go:130] > 1368162a8f09
	I0718 21:09:50.088349    5402 command_runner.go:130] > dff6311790e5
	I0718 21:09:50.088352    5402 command_runner.go:130] > d4dc52db5a77
	I0718 21:09:50.088355    5402 command_runner.go:130] > 3e7cc50a2d57
	I0718 21:09:50.088358    5402 command_runner.go:130] > e12c9aa28fc6
	I0718 21:09:50.088361    5402 command_runner.go:130] > e8e8bc7a035c
	I0718 21:09:50.088365    5402 command_runner.go:130] > f3dc8d1aa918
	I0718 21:09:50.088368    5402 command_runner.go:130] > fda4eb380979
	I0718 21:09:50.088371    5402 command_runner.go:130] > f8a7e04c5c8e
	I0718 21:09:50.088374    5402 command_runner.go:130] > 35aa60e7a3f8
	I0718 21:09:50.088377    5402 command_runner.go:130] > 539be4bab7a7
	I0718 21:09:50.088380    5402 command_runner.go:130] > 26653cd0d581
	I0718 21:09:50.088383    5402 command_runner.go:130] > 1acc8e66b837
	I0718 21:09:50.088389    5402 command_runner.go:130] > ac73727fe777
	I0718 21:09:50.088393    5402 command_runner.go:130] > 579e883db8c6
	I0718 21:09:50.088396    5402 command_runner.go:130] > a77ea521ac99
	I0718 21:09:50.088407    5402 command_runner.go:130] > 743f61bb3d97
	I0718 21:09:50.088411    5402 command_runner.go:130] > 7eaf0af2d35e
	I0718 21:09:50.088415    5402 command_runner.go:130] > 2a8b01139615
	I0718 21:09:50.088418    5402 command_runner.go:130] > 995c0513497e
	I0718 21:09:50.088423    5402 command_runner.go:130] > 6ca51eca7060
	I0718 21:09:50.088427    5402 command_runner.go:130] > f3a95fa340e8
	I0718 21:09:50.088430    5402 command_runner.go:130] > 6d37e86392a7
	I0718 21:09:50.088434    5402 command_runner.go:130] > 7ed33e97b1ef
	I0718 21:09:50.088437    5402 command_runner.go:130] > 75b247638cc4
	I0718 21:09:50.088441    5402 command_runner.go:130] > 10560abb7f24
	I0718 21:09:50.088445    5402 command_runner.go:130] > f0d043288f29
	I0718 21:09:50.088448    5402 command_runner.go:130] > f12144ab85e8
	I0718 21:09:50.088451    5402 command_runner.go:130] > 94b0a5483d84
	I0718 21:09:50.088454    5402 command_runner.go:130] > cc26bfa07489
	I0718 21:09:50.088890    5402 docker.go:483] Stopping containers: [6396364b3e0e 1368162a8f09 dff6311790e5 d4dc52db5a77 3e7cc50a2d57 e12c9aa28fc6 e8e8bc7a035c f3dc8d1aa918 fda4eb380979 f8a7e04c5c8e 35aa60e7a3f8 539be4bab7a7 26653cd0d581 1acc8e66b837 ac73727fe777 579e883db8c6 a77ea521ac99 743f61bb3d97 7eaf0af2d35e 2a8b01139615 995c0513497e 6ca51eca7060 f3a95fa340e8 6d37e86392a7 7ed33e97b1ef 75b247638cc4 10560abb7f24 f0d043288f29 f12144ab85e8 94b0a5483d84 cc26bfa07489]
	I0718 21:09:50.088966    5402 ssh_runner.go:195] Run: docker stop 6396364b3e0e 1368162a8f09 dff6311790e5 d4dc52db5a77 3e7cc50a2d57 e12c9aa28fc6 e8e8bc7a035c f3dc8d1aa918 fda4eb380979 f8a7e04c5c8e 35aa60e7a3f8 539be4bab7a7 26653cd0d581 1acc8e66b837 ac73727fe777 579e883db8c6 a77ea521ac99 743f61bb3d97 7eaf0af2d35e 2a8b01139615 995c0513497e 6ca51eca7060 f3a95fa340e8 6d37e86392a7 7ed33e97b1ef 75b247638cc4 10560abb7f24 f0d043288f29 f12144ab85e8 94b0a5483d84 cc26bfa07489
	I0718 21:09:50.101792    5402 command_runner.go:130] > 6396364b3e0e
	I0718 21:09:50.101804    5402 command_runner.go:130] > 1368162a8f09
	I0718 21:09:50.101807    5402 command_runner.go:130] > dff6311790e5
	I0718 21:09:50.102884    5402 command_runner.go:130] > d4dc52db5a77
	I0718 21:09:50.105199    5402 command_runner.go:130] > 3e7cc50a2d57
	I0718 21:09:50.105339    5402 command_runner.go:130] > e12c9aa28fc6
	I0718 21:09:50.105425    5402 command_runner.go:130] > e8e8bc7a035c
	I0718 21:09:50.105466    5402 command_runner.go:130] > f3dc8d1aa918
	I0718 21:09:50.106091    5402 command_runner.go:130] > fda4eb380979
	I0718 21:09:50.106097    5402 command_runner.go:130] > f8a7e04c5c8e
	I0718 21:09:50.106100    5402 command_runner.go:130] > 35aa60e7a3f8
	I0718 21:09:50.106103    5402 command_runner.go:130] > 539be4bab7a7
	I0718 21:09:50.106106    5402 command_runner.go:130] > 26653cd0d581
	I0718 21:09:50.106109    5402 command_runner.go:130] > 1acc8e66b837
	I0718 21:09:50.106112    5402 command_runner.go:130] > ac73727fe777
	I0718 21:09:50.106115    5402 command_runner.go:130] > 579e883db8c6
	I0718 21:09:50.106118    5402 command_runner.go:130] > a77ea521ac99
	I0718 21:09:50.106122    5402 command_runner.go:130] > 743f61bb3d97
	I0718 21:09:50.106126    5402 command_runner.go:130] > 7eaf0af2d35e
	I0718 21:09:50.106157    5402 command_runner.go:130] > 2a8b01139615
	I0718 21:09:50.106213    5402 command_runner.go:130] > 995c0513497e
	I0718 21:09:50.106220    5402 command_runner.go:130] > 6ca51eca7060
	I0718 21:09:50.106223    5402 command_runner.go:130] > f3a95fa340e8
	I0718 21:09:50.106226    5402 command_runner.go:130] > 6d37e86392a7
	I0718 21:09:50.106238    5402 command_runner.go:130] > 7ed33e97b1ef
	I0718 21:09:50.106242    5402 command_runner.go:130] > 75b247638cc4
	I0718 21:09:50.106245    5402 command_runner.go:130] > 10560abb7f24
	I0718 21:09:50.106248    5402 command_runner.go:130] > f0d043288f29
	I0718 21:09:50.106252    5402 command_runner.go:130] > f12144ab85e8
	I0718 21:09:50.106258    5402 command_runner.go:130] > 94b0a5483d84
	I0718 21:09:50.106261    5402 command_runner.go:130] > cc26bfa07489
	I0718 21:09:50.107063    5402 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0718 21:09:50.120194    5402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0718 21:09:50.128584    5402 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0718 21:09:50.128595    5402 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0718 21:09:50.128601    5402 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0718 21:09:50.128606    5402 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0718 21:09:50.128625    5402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0718 21:09:50.128630    5402 kubeadm.go:157] found existing configuration files:
	
	I0718 21:09:50.128678    5402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0718 21:09:50.137133    5402 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0718 21:09:50.137153    5402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0718 21:09:50.137193    5402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0718 21:09:50.145270    5402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0718 21:09:50.153173    5402 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0718 21:09:50.153199    5402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0718 21:09:50.153241    5402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0718 21:09:50.161259    5402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0718 21:09:50.168976    5402 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0718 21:09:50.169002    5402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0718 21:09:50.169035    5402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0718 21:09:50.177124    5402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0718 21:09:50.184903    5402 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0718 21:09:50.184923    5402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0718 21:09:50.184963    5402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0718 21:09:50.193193    5402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0718 21:09:50.201432    5402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0718 21:09:50.264206    5402 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0718 21:09:50.264376    5402 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0718 21:09:50.264550    5402 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0718 21:09:50.264702    5402 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0718 21:09:50.264936    5402 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0718 21:09:50.265099    5402 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0718 21:09:50.265438    5402 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0718 21:09:50.265603    5402 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0718 21:09:50.265807    5402 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0718 21:09:50.265968    5402 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0718 21:09:50.266129    5402 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0718 21:09:50.266362    5402 command_runner.go:130] > [certs] Using the existing "sa" key
	I0718 21:09:50.267280    5402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0718 21:09:50.305508    5402 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0718 21:09:50.512259    5402 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0718 21:09:50.682912    5402 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0718 21:09:50.850952    5402 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0718 21:09:51.139031    5402 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0718 21:09:51.231479    5402 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0718 21:09:51.233315    5402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0718 21:09:51.287873    5402 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0718 21:09:51.288489    5402 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0718 21:09:51.288539    5402 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0718 21:09:51.392192    5402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0718 21:09:51.451968    5402 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0718 21:09:51.451996    5402 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0718 21:09:51.453897    5402 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0718 21:09:51.454650    5402 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0718 21:09:51.455902    5402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0718 21:09:51.510173    5402 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0718 21:09:51.525091    5402 api_server.go:52] waiting for apiserver process to appear ...
	I0718 21:09:51.525156    5402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 21:09:52.025288    5402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 21:09:52.526915    5402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 21:09:53.025265    5402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 21:09:53.527315    5402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 21:09:53.539281    5402 command_runner.go:130] > 1628
	I0718 21:09:53.539501    5402 api_server.go:72] duration metric: took 2.014357194s to wait for apiserver process to appear ...
	I0718 21:09:53.539510    5402 api_server.go:88] waiting for apiserver healthz status ...
	I0718 21:09:53.539526    5402 api_server.go:253] Checking apiserver healthz at https://192.169.0.17:8443/healthz ...
	I0718 21:09:56.800670    5402 api_server.go:279] https://192.169.0.17:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0718 21:09:56.800686    5402 api_server.go:103] status: https://192.169.0.17:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0718 21:09:56.800702    5402 api_server.go:253] Checking apiserver healthz at https://192.169.0.17:8443/healthz ...
	I0718 21:09:56.809886    5402 api_server.go:279] https://192.169.0.17:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0718 21:09:56.809907    5402 api_server.go:103] status: https://192.169.0.17:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0718 21:09:57.039841    5402 api_server.go:253] Checking apiserver healthz at https://192.169.0.17:8443/healthz ...
	I0718 21:09:57.044192    5402 api_server.go:279] https://192.169.0.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0718 21:09:57.044206    5402 api_server.go:103] status: https://192.169.0.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0718 21:09:57.539786    5402 api_server.go:253] Checking apiserver healthz at https://192.169.0.17:8443/healthz ...
	I0718 21:09:57.546110    5402 api_server.go:279] https://192.169.0.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0718 21:09:57.546132    5402 api_server.go:103] status: https://192.169.0.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0718 21:09:58.041719    5402 api_server.go:253] Checking apiserver healthz at https://192.169.0.17:8443/healthz ...
	I0718 21:09:58.045667    5402 api_server.go:279] https://192.169.0.17:8443/healthz returned 200:
	ok
	I0718 21:09:58.045727    5402 round_trippers.go:463] GET https://192.169.0.17:8443/version
	I0718 21:09:58.045732    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:58.045740    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:58.045743    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:58.055667    5402 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0718 21:09:58.055680    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:58.055685    5402 round_trippers.go:580]     Audit-Id: 08c448ae-e25a-4c29-b867-a5570bd6aee8
	I0718 21:09:58.055688    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:58.055691    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:58.055693    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:58.055695    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:58.055704    5402 round_trippers.go:580]     Content-Length: 263
	I0718 21:09:58.055706    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:58 GMT
	I0718 21:09:58.055726    5402 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0718 21:09:58.055779    5402 api_server.go:141] control plane version: v1.30.3
	I0718 21:09:58.055794    5402 api_server.go:131] duration metric: took 4.51614154s to wait for apiserver health ...
	I0718 21:09:58.055801    5402 cni.go:84] Creating CNI manager for ""
	I0718 21:09:58.055804    5402 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0718 21:09:58.079155    5402 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0718 21:09:58.115320    5402 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0718 21:09:58.120934    5402 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0718 21:09:58.120950    5402 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0718 21:09:58.120956    5402 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0718 21:09:58.120962    5402 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0718 21:09:58.120965    5402 command_runner.go:130] > Access: 2024-07-19 04:09:17.239306511 +0000
	I0718 21:09:58.120970    5402 command_runner.go:130] > Modify: 2024-07-18 23:04:21.000000000 +0000
	I0718 21:09:58.120974    5402 command_runner.go:130] > Change: 2024-07-19 04:09:15.712306616 +0000
	I0718 21:09:58.120978    5402 command_runner.go:130] >  Birth: -
	I0718 21:09:58.121033    5402 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0718 21:09:58.121040    5402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0718 21:09:58.148946    5402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0718 21:09:58.749336    5402 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0718 21:09:58.749351    5402 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0718 21:09:58.749356    5402 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0718 21:09:58.749360    5402 command_runner.go:130] > daemonset.apps/kindnet configured
	I0718 21:09:58.749400    5402 system_pods.go:43] waiting for kube-system pods to appear ...
	I0718 21:09:58.749454    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods
	I0718 21:09:58.749459    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:58.749465    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:58.749469    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:58.755364    5402 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0718 21:09:58.755377    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:58.755383    5402 round_trippers.go:580]     Audit-Id: 09fbb9ca-7140-4f56-8e7f-3d3135537de8
	I0718 21:09:58.755385    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:58.755388    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:58.755391    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:58.755403    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:58.755406    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:58 GMT
	I0718 21:09:58.756887    5402 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1161"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87618 chars]
	I0718 21:09:58.759964    5402 system_pods.go:59] 12 kube-system pods found
	I0718 21:09:58.759979    5402 system_pods.go:61] "coredns-7db6d8ff4d-76x8d" [55e9cca6-f3d6-4b2f-a8de-df91db8e186a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0718 21:09:58.759984    5402 system_pods.go:61] "etcd-multinode-127000" [4d4a84eb-c0c3-44f3-a515-e99b9ba8fe88] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0718 21:09:58.759989    5402 system_pods.go:61] "kindnet-28cb8" [f603b4ff-800e-40e6-9c53-20626c4dfd35] Running
	I0718 21:09:58.759992    5402 system_pods.go:61] "kindnet-ks8xk" [358f14a8-284b-4570-96d1-d519f18269fa] Running
	I0718 21:09:58.759995    5402 system_pods.go:61] "kindnet-lt5bk" [f81f29e6-917b-4347-ad73-aa9b51320b17] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0718 21:09:58.759998    5402 system_pods.go:61] "kube-apiserver-multinode-127000" [15bce3aa-75a4-4cca-beec-20a4eeed2c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0718 21:09:58.760005    5402 system_pods.go:61] "kube-controller-manager-multinode-127000" [38250320-d12a-418f-867a-05a82f4f876c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0718 21:09:58.760014    5402 system_pods.go:61] "kube-proxy-8j597" [51e85da8-2b18-4373-8f84-65ed52d6bc13] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0718 21:09:58.760017    5402 system_pods.go:61] "kube-proxy-8nvff" [4b740c91-be18-4bc8-9698-0b4fbda8695e] Running
	I0718 21:09:58.760023    5402 system_pods.go:61] "kube-proxy-nxf5m" [e48c420f-b1a1-4a9e-bc7e-fa0d640e5764] Running
	I0718 21:09:58.760027    5402 system_pods.go:61] "kube-scheduler-multinode-127000" [3060259c-364e-4c24-ae43-107cc1973705] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0718 21:09:58.760038    5402 system_pods.go:61] "storage-provisioner" [cd072b88-33f2-4988-985a-f1a00f8eb449] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0718 21:09:58.760043    5402 system_pods.go:74] duration metric: took 10.63518ms to wait for pod list to return data ...
	I0718 21:09:58.760050    5402 node_conditions.go:102] verifying NodePressure condition ...
	I0718 21:09:58.760089    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes
	I0718 21:09:58.760093    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:58.760099    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:58.760102    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:58.761803    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:09:58.761812    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:58.761817    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:58.761820    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:58.761823    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:58.761826    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:58 GMT
	I0718 21:09:58.761829    5402 round_trippers.go:580]     Audit-Id: c878751b-2510-4c40-b234-3b903dee2914
	I0718 21:09:58.761832    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:58.761921    5402 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1161"},"items":[{"metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10158 chars]
	I0718 21:09:58.762371    5402 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0718 21:09:58.762383    5402 node_conditions.go:123] node cpu capacity is 2
	I0718 21:09:58.762391    5402 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0718 21:09:58.762394    5402 node_conditions.go:123] node cpu capacity is 2
	I0718 21:09:58.762400    5402 node_conditions.go:105] duration metric: took 2.343982ms to run NodePressure ...
	I0718 21:09:58.762409    5402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0718 21:09:58.922323    5402 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0718 21:09:59.013139    5402 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0718 21:09:59.014290    5402 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0718 21:09:59.014353    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0718 21:09:59.014359    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.014364    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.014369    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.016141    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:09:59.016151    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.016156    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.016175    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:59 GMT
	I0718 21:09:59.016180    5402 round_trippers.go:580]     Audit-Id: fe48c28c-9761-420d-abcb-aa1ce4ad0881
	I0718 21:09:59.016185    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.016188    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.016195    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.016601    5402 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1166"},"items":[{"metadata":{"name":"etcd-multinode-127000","namespace":"kube-system","uid":"4d4a84eb-c0c3-44f3-a515-e99b9ba8fe88","resourceVersion":"1155","creationTimestamp":"2024-07-19T04:02:50Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.17:2379","kubernetes.io/config.hash":"dd34cf21994d39cf28d26460f62a29d2","kubernetes.io/config.mirror":"dd34cf21994d39cf28d26460f62a29d2","kubernetes.io/config.seen":"2024-07-19T04:02:50.143265078Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:02:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 30919 chars]
	I0718 21:09:59.018498    5402 kubeadm.go:739] kubelet initialised
	I0718 21:09:59.018510    5402 kubeadm.go:740] duration metric: took 4.210532ms waiting for restarted kubelet to initialise ...
	I0718 21:09:59.018517    5402 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0718 21:09:59.018555    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods
	I0718 21:09:59.018561    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.018571    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.018576    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.020778    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:09:59.020794    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.020804    5402 round_trippers.go:580]     Audit-Id: a7c125f4-6813-4db9-9dcd-79a0c4aa4f02
	I0718 21:09:59.020811    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.020818    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.020823    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.020828    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.020833    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:59 GMT
	I0718 21:09:59.021541    5402 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1166"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87025 chars]
	I0718 21:09:59.023364    5402 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-76x8d" in "kube-system" namespace to be "Ready" ...
	I0718 21:09:59.023407    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:09:59.023412    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.023418    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.023422    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.024748    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:09:59.024754    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.024759    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.024762    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.024766    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:59 GMT
	I0718 21:09:59.024769    5402 round_trippers.go:580]     Audit-Id: 2b8cf813-83e9-413c-9f19-1eb85b059e7f
	I0718 21:09:59.024772    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.024774    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.025030    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:09:59.025270    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:09:59.025277    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.025283    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.025285    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.026373    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:09:59.026379    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.026384    5402 round_trippers.go:580]     Audit-Id: 42e96ccd-f10e-4e97-9b84-5df50b2079da
	I0718 21:09:59.026389    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.026395    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.026399    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.026403    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.026405    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:59 GMT
	I0718 21:09:59.026693    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:09:59.026875    5402 pod_ready.go:97] node "multinode-127000" hosting pod "coredns-7db6d8ff4d-76x8d" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-127000" has status "Ready":"False"
	I0718 21:09:59.026885    5402 pod_ready.go:81] duration metric: took 3.510264ms for pod "coredns-7db6d8ff4d-76x8d" in "kube-system" namespace to be "Ready" ...
	E0718 21:09:59.026891    5402 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-127000" hosting pod "coredns-7db6d8ff4d-76x8d" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-127000" has status "Ready":"False"
	I0718 21:09:59.026898    5402 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-127000" in "kube-system" namespace to be "Ready" ...
	I0718 21:09:59.026926    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-127000
	I0718 21:09:59.026931    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.026936    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.026941    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.028065    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:09:59.028088    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.028117    5402 round_trippers.go:580]     Audit-Id: e0299f71-819f-4f69-baf4-c57220127541
	I0718 21:09:59.028126    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.028131    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.028139    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.028142    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.028145    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:59 GMT
	I0718 21:09:59.028366    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-127000","namespace":"kube-system","uid":"4d4a84eb-c0c3-44f3-a515-e99b9ba8fe88","resourceVersion":"1155","creationTimestamp":"2024-07-19T04:02:50Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.17:2379","kubernetes.io/config.hash":"dd34cf21994d39cf28d26460f62a29d2","kubernetes.io/config.mirror":"dd34cf21994d39cf28d26460f62a29d2","kubernetes.io/config.seen":"2024-07-19T04:02:50.143265078Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:02:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0718 21:09:59.028584    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:09:59.028591    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.028597    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.028601    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.029813    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:09:59.029822    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.029829    5402 round_trippers.go:580]     Audit-Id: 4d346494-cdd6-4318-9ed0-3ed37e0fccbb
	I0718 21:09:59.029834    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.029838    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.029841    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.029846    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.029850    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:59 GMT
	I0718 21:09:59.029953    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:09:59.030124    5402 pod_ready.go:97] node "multinode-127000" hosting pod "etcd-multinode-127000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-127000" has status "Ready":"False"
	I0718 21:09:59.030134    5402 pod_ready.go:81] duration metric: took 3.229729ms for pod "etcd-multinode-127000" in "kube-system" namespace to be "Ready" ...
	E0718 21:09:59.030139    5402 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-127000" hosting pod "etcd-multinode-127000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-127000" has status "Ready":"False"
	I0718 21:09:59.030148    5402 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-127000" in "kube-system" namespace to be "Ready" ...
	I0718 21:09:59.030175    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-127000
	I0718 21:09:59.030180    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.030185    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.030188    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.031210    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:09:59.031219    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.031226    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.031230    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.031253    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.031262    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.031267    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:59 GMT
	I0718 21:09:59.031272    5402 round_trippers.go:580]     Audit-Id: 34ce49d3-e90b-480b-b73f-33bce15e14d5
	I0718 21:09:59.031372    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-127000","namespace":"kube-system","uid":"15bce3aa-75a4-4cca-beec-20a4eeed2c14","resourceVersion":"1154","creationTimestamp":"2024-07-19T04:02:50Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.17:8443","kubernetes.io/config.hash":"adeddd763cb12ff26454c97d2cb34645","kubernetes.io/config.mirror":"adeddd763cb12ff26454c97d2cb34645","kubernetes.io/config.seen":"2024-07-19T04:02:50.143265837Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:02:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8136 chars]
	I0718 21:09:59.031621    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:09:59.031630    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.031635    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.031640    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.032732    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:09:59.032739    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.032743    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:59 GMT
	I0718 21:09:59.032747    5402 round_trippers.go:580]     Audit-Id: 1b313689-f17e-4acc-a63c-87d3a4f5018f
	I0718 21:09:59.032750    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.032753    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.032756    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.032759    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.032952    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:09:59.033106    5402 pod_ready.go:97] node "multinode-127000" hosting pod "kube-apiserver-multinode-127000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-127000" has status "Ready":"False"
	I0718 21:09:59.033115    5402 pod_ready.go:81] duration metric: took 2.961848ms for pod "kube-apiserver-multinode-127000" in "kube-system" namespace to be "Ready" ...
	E0718 21:09:59.033121    5402 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-127000" hosting pod "kube-apiserver-multinode-127000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-127000" has status "Ready":"False"
	I0718 21:09:59.033130    5402 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-127000" in "kube-system" namespace to be "Ready" ...
	I0718 21:09:59.033158    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-127000
	I0718 21:09:59.033163    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.033168    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.033173    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.034245    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:09:59.034253    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.034258    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.034274    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.034282    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:59 GMT
	I0718 21:09:59.034286    5402 round_trippers.go:580]     Audit-Id: 4c4d07ff-aa50-4ff1-b585-9aea6e7b35e8
	I0718 21:09:59.034289    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.034341    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.034437    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-127000","namespace":"kube-system","uid":"38250320-d12a-418f-867a-05a82f4f876c","resourceVersion":"1157","creationTimestamp":"2024-07-19T04:02:50Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"14d5cf7b26b6a66b49878f0b6b5873c6","kubernetes.io/config.mirror":"14d5cf7b26b6a66b49878f0b6b5873c6","kubernetes.io/config.seen":"2024-07-19T04:02:50.143266437Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:02:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7729 chars]
	I0718 21:09:59.149980    5402 request.go:629] Waited for 115.176944ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:09:59.150034    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:09:59.150045    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.150056    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.150065    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.152642    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:09:59.152657    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.152665    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:59 GMT
	I0718 21:09:59.152684    5402 round_trippers.go:580]     Audit-Id: 92b155d6-d080-4b49-9905-169d50ccf694
	I0718 21:09:59.152696    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.152709    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.152713    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.152718    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.152808    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:09:59.153064    5402 pod_ready.go:97] node "multinode-127000" hosting pod "kube-controller-manager-multinode-127000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-127000" has status "Ready":"False"
	I0718 21:09:59.153078    5402 pod_ready.go:81] duration metric: took 119.9375ms for pod "kube-controller-manager-multinode-127000" in "kube-system" namespace to be "Ready" ...
	E0718 21:09:59.153086    5402 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-127000" hosting pod "kube-controller-manager-multinode-127000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-127000" has status "Ready":"False"
	I0718 21:09:59.153092    5402 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8j597" in "kube-system" namespace to be "Ready" ...
	I0718 21:09:59.350402    5402 request.go:629] Waited for 197.166967ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8j597
	I0718 21:09:59.350472    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8j597
	I0718 21:09:59.350482    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.350495    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.350502    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.352964    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:09:59.352977    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.352984    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.352988    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:59 GMT
	I0718 21:09:59.352991    5402 round_trippers.go:580]     Audit-Id: 2292ad80-0425-4b73-937c-fa5ab7918a27
	I0718 21:09:59.352994    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.352999    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.353002    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.353145    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8j597","generateName":"kube-proxy-","namespace":"kube-system","uid":"51e85da8-2b18-4373-8f84-65ed52d6bc13","resourceVersion":"1162","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4f07a990-f65d-45d1-9766-77572b6fc4bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4f07a990-f65d-45d1-9766-77572b6fc4bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0718 21:09:59.550218    5402 request.go:629] Waited for 196.67273ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:09:59.550333    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:09:59.550344    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.550355    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.550362    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.553436    5402 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 21:09:59.553447    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.553452    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.553455    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.553457    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.553460    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:59 GMT
	I0718 21:09:59.553462    5402 round_trippers.go:580]     Audit-Id: c5aed568-9d13-48da-ad3f-42d7d640129e
	I0718 21:09:59.553465    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.553552    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:09:59.553761    5402 pod_ready.go:97] node "multinode-127000" hosting pod "kube-proxy-8j597" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-127000" has status "Ready":"False"
	I0718 21:09:59.553771    5402 pod_ready.go:81] duration metric: took 400.661484ms for pod "kube-proxy-8j597" in "kube-system" namespace to be "Ready" ...
	E0718 21:09:59.553779    5402 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-127000" hosting pod "kube-proxy-8j597" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-127000" has status "Ready":"False"
	I0718 21:09:59.553784    5402 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8nvff" in "kube-system" namespace to be "Ready" ...
	I0718 21:09:59.750143    5402 request.go:629] Waited for 196.283805ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8nvff
	I0718 21:09:59.750331    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8nvff
	I0718 21:09:59.750341    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.750353    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.750360    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.753003    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:09:59.753031    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.753044    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:59 GMT
	I0718 21:09:59.753052    5402 round_trippers.go:580]     Audit-Id: fffc89da-331d-4203-86d7-6713e44e73fb
	I0718 21:09:59.753056    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.753061    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.753065    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.753068    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.753221    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8nvff","generateName":"kube-proxy-","namespace":"kube-system","uid":"4b740c91-be18-4bc8-9698-0b4fbda8695e","resourceVersion":"1110","creationTimestamp":"2024-07-19T04:04:35Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4f07a990-f65d-45d1-9766-77572b6fc4bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:04:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4f07a990-f65d-45d1-9766-77572b6fc4bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0718 21:09:59.949756    5402 request.go:629] Waited for 196.192963ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/nodes/multinode-127000-m03
	I0718 21:09:59.949875    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000-m03
	I0718 21:09:59.949886    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.949898    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.949905    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.952270    5402 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0718 21:09:59.952285    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.952292    5402 round_trippers.go:580]     Audit-Id: ef29f1e3-1adf-4801-baa4-6382d1ffb9f1
	I0718 21:09:59.952297    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.952301    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.952304    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.952308    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.952312    5402 round_trippers.go:580]     Content-Length: 210
	I0718 21:09:59.952316    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:00 GMT
	I0718 21:09:59.952329    5402 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-127000-m03\" not found","reason":"NotFound","details":{"name":"multinode-127000-m03","kind":"nodes"},"code":404}
	I0718 21:09:59.952482    5402 pod_ready.go:97] node "multinode-127000-m03" hosting pod "kube-proxy-8nvff" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-127000-m03": nodes "multinode-127000-m03" not found
	I0718 21:09:59.952495    5402 pod_ready.go:81] duration metric: took 398.694196ms for pod "kube-proxy-8nvff" in "kube-system" namespace to be "Ready" ...
	E0718 21:09:59.952503    5402 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-127000-m03" hosting pod "kube-proxy-8nvff" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-127000-m03": nodes "multinode-127000-m03" not found
	I0718 21:09:59.952510    5402 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nxf5m" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:00.150340    5402 request.go:629] Waited for 197.77869ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nxf5m
	I0718 21:10:00.150459    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nxf5m
	I0718 21:10:00.150471    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:00.150490    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:00.150501    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:00.153318    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:00.153333    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:00.153339    5402 round_trippers.go:580]     Audit-Id: 737c0b60-d293-42db-a91b-00657bd68555
	I0718 21:10:00.153345    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:00.153349    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:00.153354    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:00.153359    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:00.153364    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:00 GMT
	I0718 21:10:00.153524    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nxf5m","generateName":"kube-proxy-","namespace":"kube-system","uid":"e48c420f-b1a1-4a9e-bc7e-fa0d640e5764","resourceVersion":"993","creationTimestamp":"2024-07-19T04:03:47Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4f07a990-f65d-45d1-9766-77572b6fc4bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4f07a990-f65d-45d1-9766-77572b6fc4bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0718 21:10:00.350603    5402 request.go:629] Waited for 196.732329ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/nodes/multinode-127000-m02
	I0718 21:10:00.350656    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000-m02
	I0718 21:10:00.350667    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:00.350735    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:00.350747    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:00.353211    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:00.353233    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:00.353243    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:00.353256    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:00.353260    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:00 GMT
	I0718 21:10:00.353267    5402 round_trippers.go:580]     Audit-Id: fc54a3f1-52b1-48ef-bddf-98c3520948a3
	I0718 21:10:00.353272    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:00.353280    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:00.353368    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000-m02","uid":"7e73463a-ae2d-4a9c-a2b8-e12809583e97","resourceVersion":"1019","creationTimestamp":"2024-07-19T04:07:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_18T21_07_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:07:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3811 chars]
	I0718 21:10:00.353604    5402 pod_ready.go:92] pod "kube-proxy-nxf5m" in "kube-system" namespace has status "Ready":"True"
	I0718 21:10:00.353615    5402 pod_ready.go:81] duration metric: took 401.085363ms for pod "kube-proxy-nxf5m" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:00.353623    5402 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-127000" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:00.551225    5402 request.go:629] Waited for 197.401201ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-127000
	I0718 21:10:00.551272    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-127000
	I0718 21:10:00.551281    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:00.551291    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:00.551297    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:00.553642    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:00.553654    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:00.553661    5402 round_trippers.go:580]     Audit-Id: 6a4c5928-dec0-4f17-8503-5804b897e380
	I0718 21:10:00.553666    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:00.553669    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:00.553671    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:00.553674    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:00.553678    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:00 GMT
	I0718 21:10:00.554054    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-127000","namespace":"kube-system","uid":"3060259c-364e-4c24-ae43-107cc1973705","resourceVersion":"1156","creationTimestamp":"2024-07-19T04:02:50Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"746f7833447444339ca9b76cec94dc1f","kubernetes.io/config.mirror":"746f7833447444339ca9b76cec94dc1f","kubernetes.io/config.seen":"2024-07-19T04:02:50.143262549Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:02:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5439 chars]
	I0718 21:10:00.750848    5402 request.go:629] Waited for 196.463338ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:00.750964    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:00.750973    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:00.750983    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:00.750990    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:00.754069    5402 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 21:10:00.754095    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:00.754105    5402 round_trippers.go:580]     Audit-Id: fc94a526-9ae7-4e9d-8768-93ea66030c7f
	I0718 21:10:00.754114    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:00.754122    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:00.754127    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:00.754135    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:00.754142    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:00 GMT
	I0718 21:10:00.754476    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:00.754742    5402 pod_ready.go:97] node "multinode-127000" hosting pod "kube-scheduler-multinode-127000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-127000" has status "Ready":"False"
	I0718 21:10:00.754756    5402 pod_ready.go:81] duration metric: took 401.099374ms for pod "kube-scheduler-multinode-127000" in "kube-system" namespace to be "Ready" ...
	E0718 21:10:00.754764    5402 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-127000" hosting pod "kube-scheduler-multinode-127000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-127000" has status "Ready":"False"
	I0718 21:10:00.754771    5402 pod_ready.go:38] duration metric: took 1.736195789s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0718 21:10:00.754789    5402 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0718 21:10:00.764551    5402 command_runner.go:130] > -16
	I0718 21:10:00.764759    5402 ops.go:34] apiserver oom_adj: -16
	I0718 21:10:00.764767    5402 kubeadm.go:597] duration metric: took 10.709068229s to restartPrimaryControlPlane
	I0718 21:10:00.764772    5402 kubeadm.go:394] duration metric: took 10.731426361s to StartCluster
	I0718 21:10:00.764780    5402 settings.go:142] acquiring lock: {Name:mk3b26f3c8475777a106e604fcaf3d840de0df1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:10:00.764869    5402 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19302-1411/kubeconfig
	I0718 21:10:00.765287    5402 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1411/kubeconfig: {Name:mk98b5ca4921c9b1e25bd07d5b44b266493ad1f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:10:00.765647    5402 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:10:00.765667    5402 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0718 21:10:00.765773    5402 config.go:182] Loaded profile config "multinode-127000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:10:00.787226    5402 out.go:177] * Verifying Kubernetes components...
	I0718 21:10:00.828712    5402 out.go:177] * Enabled addons: 
	I0718 21:10:00.850006    5402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:10:00.870886    5402 addons.go:510] duration metric: took 105.22099ms for enable addons: enabled=[]
	I0718 21:10:00.988295    5402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 21:10:00.998793    5402 node_ready.go:35] waiting up to 6m0s for node "multinode-127000" to be "Ready" ...
	I0718 21:10:00.998857    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:00.998863    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:00.998869    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:00.998872    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:01.000235    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:01.000247    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:01.000252    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:01.000256    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:01.000260    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:01.000262    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:01 GMT
	I0718 21:10:01.000265    5402 round_trippers.go:580]     Audit-Id: c91c7da9-6269-4b34-b4be-4cd8871e35cc
	I0718 21:10:01.000268    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:01.000382    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:01.499220    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:01.499247    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:01.499258    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:01.499266    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:01.501785    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:01.501799    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:01.501806    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:01.501811    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:01.501825    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:01 GMT
	I0718 21:10:01.501829    5402 round_trippers.go:580]     Audit-Id: dbaf39f6-96c6-4114-8599-940dc95bcc23
	I0718 21:10:01.501833    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:01.501838    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:01.502028    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:01.999922    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:01.999957    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:02.000051    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:02.000059    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:02.002321    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:02.002336    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:02.002344    5402 round_trippers.go:580]     Audit-Id: 0c925959-8590-4395-a787-940865682434
	I0718 21:10:02.002348    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:02.002353    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:02.002358    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:02.002371    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:02.002377    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:02 GMT
	I0718 21:10:02.002444    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:02.500082    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:02.500111    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:02.500123    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:02.500129    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:02.502519    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:02.502535    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:02.502543    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:02.502546    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:02.502564    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:02.502567    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:02 GMT
	I0718 21:10:02.502571    5402 round_trippers.go:580]     Audit-Id: e5ec95b5-0443-4403-b4e9-454bb3d63920
	I0718 21:10:02.502581    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:02.502937    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:02.999587    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:02.999612    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:02.999624    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:02.999632    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:03.002020    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:03.002034    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:03.002040    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:03.002045    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:03.002049    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:03.002053    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:03 GMT
	I0718 21:10:03.002058    5402 round_trippers.go:580]     Audit-Id: fc74dd5c-877c-4af5-a8ff-2ea1c12bd1dd
	I0718 21:10:03.002062    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:03.002217    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:03.002460    5402 node_ready.go:53] node "multinode-127000" has status "Ready":"False"
	I0718 21:10:03.499589    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:03.499613    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:03.499630    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:03.499637    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:03.501937    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:03.501951    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:03.501959    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:03 GMT
	I0718 21:10:03.501964    5402 round_trippers.go:580]     Audit-Id: a6acbb46-f744-4837-a8be-12aaa08ea891
	I0718 21:10:03.501987    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:03.501996    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:03.502004    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:03.502011    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:03.502084    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:04.000364    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:04.000393    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:04.000405    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:04.000412    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:04.003373    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:04.003388    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:04.003395    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:04.003400    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:04.003404    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:04.003408    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:04.003412    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:04 GMT
	I0718 21:10:04.003415    5402 round_trippers.go:580]     Audit-Id: 8217548d-ba6c-4da5-9252-3c3ea223b4bb
	I0718 21:10:04.003509    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:04.500081    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:04.500103    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:04.500116    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:04.500122    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:04.502844    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:04.502858    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:04.502865    5402 round_trippers.go:580]     Audit-Id: b94a00e3-13f1-4bf8-bdf8-8530afcd0d6a
	I0718 21:10:04.502870    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:04.502874    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:04.502877    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:04.502882    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:04.502885    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:04 GMT
	I0718 21:10:04.503138    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:05.000254    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:05.000282    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:05.000293    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:05.000299    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:05.003114    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:05.003129    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:05.003177    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:05.003190    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:05.003194    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:05 GMT
	I0718 21:10:05.003199    5402 round_trippers.go:580]     Audit-Id: 0402c907-e4c7-490b-8a35-c2acc9370318
	I0718 21:10:05.003203    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:05.003209    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:05.003429    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:05.003716    5402 node_ready.go:53] node "multinode-127000" has status "Ready":"False"
	I0718 21:10:05.500345    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:05.500371    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:05.500383    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:05.500388    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:05.503109    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:05.503124    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:05.503131    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:05.503135    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:05 GMT
	I0718 21:10:05.503139    5402 round_trippers.go:580]     Audit-Id: 92cffdf8-66e6-473e-bc0a-e98418c93b41
	I0718 21:10:05.503142    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:05.503147    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:05.503151    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:05.503427    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:05.999203    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:05.999227    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:05.999239    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:05.999245    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:06.001628    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:06.001666    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:06.001700    5402 round_trippers.go:580]     Audit-Id: dda60b80-bc38-4d54-b754-ade4914707bc
	I0718 21:10:06.001706    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:06.001711    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:06.001716    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:06.001721    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:06.001725    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:06 GMT
	I0718 21:10:06.001988    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:06.500146    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:06.500168    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:06.500180    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:06.500188    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:06.502722    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:06.502735    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:06.502742    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:06.502747    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:06.502750    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:06.502754    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:06.502757    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:06 GMT
	I0718 21:10:06.502763    5402 round_trippers.go:580]     Audit-Id: e72476a2-4955-4590-98b4-195bdf982f06
	I0718 21:10:06.503145    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:07.001293    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:07.001313    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:07.001326    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:07.001334    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:07.003572    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:07.003589    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:07.003597    5402 round_trippers.go:580]     Audit-Id: dd3bccd4-a1ec-47c4-8ae1-4e0dec030e59
	I0718 21:10:07.003603    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:07.003607    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:07.003612    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:07.003617    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:07.003621    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:07 GMT
	I0718 21:10:07.003679    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:07.003918    5402 node_ready.go:53] node "multinode-127000" has status "Ready":"False"
	I0718 21:10:07.501209    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:07.501231    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:07.501244    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:07.501252    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:07.503791    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:07.503805    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:07.503812    5402 round_trippers.go:580]     Audit-Id: c8c996ba-ef58-4679-bfaf-63100839349e
	I0718 21:10:07.503816    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:07.503820    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:07.503854    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:07.503864    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:07.503869    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:07 GMT
	I0718 21:10:07.503981    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:08.000606    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:08.000628    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:08.000640    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:08.000646    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:08.003326    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:08.003341    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:08.003347    5402 round_trippers.go:580]     Audit-Id: aa0fc8ef-4ca2-4c3a-a6e5-de1f6a5c7b98
	I0718 21:10:08.003352    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:08.003355    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:08.003358    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:08.003362    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:08.003365    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:08 GMT
	I0718 21:10:08.003540    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:08.499281    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:08.499304    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:08.499316    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:08.499322    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:08.501546    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:08.501562    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:08.501570    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:08.501574    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:08.501577    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:08.501580    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:08 GMT
	I0718 21:10:08.501583    5402 round_trippers.go:580]     Audit-Id: 6b65fa20-5a8e-4e8e-ab77-1a8d8b2ae467
	I0718 21:10:08.501587    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:08.501703    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:08.999285    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:08.999298    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:08.999304    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:08.999307    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:09.001033    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:09.001043    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:09.001048    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:09.001052    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:09.001054    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:09.001058    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:09.001063    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:09 GMT
	I0718 21:10:09.001068    5402 round_trippers.go:580]     Audit-Id: a7804ac7-bf5f-47ea-a40e-7021c1ed87d4
	I0718 21:10:09.001279    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:09.500127    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:09.500142    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:09.500151    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:09.500155    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:09.502148    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:09.502157    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:09.502166    5402 round_trippers.go:580]     Audit-Id: 57b0beab-cef3-47b9-888d-752a442affd8
	I0718 21:10:09.502170    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:09.502173    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:09.502177    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:09.502180    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:09.502184    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:09 GMT
	I0718 21:10:09.502308    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:09.502496    5402 node_ready.go:53] node "multinode-127000" has status "Ready":"False"
	I0718 21:10:10.000748    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:10.000768    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:10.000779    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:10.000786    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:10.003344    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:10.003357    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:10.003363    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:10.003369    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:10.003372    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:10.003376    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:10.003381    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:10 GMT
	I0718 21:10:10.003384    5402 round_trippers.go:580]     Audit-Id: f9f0f7f5-2f1d-43b8-8571-28c20ebd1ea0
	I0718 21:10:10.003541    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:10.501024    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:10.501051    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:10.501064    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:10.501069    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:10.503795    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:10.503811    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:10.503819    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:10.503824    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:10 GMT
	I0718 21:10:10.503828    5402 round_trippers.go:580]     Audit-Id: 2c98e917-8857-4a2e-9840-1a2873539ac7
	I0718 21:10:10.503831    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:10.503835    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:10.503838    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:10.503938    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:11.000695    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:11.000774    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:11.000788    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:11.000793    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:11.003076    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:11.003091    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:11.003098    5402 round_trippers.go:580]     Audit-Id: 69adac19-e2f5-4c8a-acc2-6ab697611c4d
	I0718 21:10:11.003103    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:11.003108    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:11.003112    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:11.003115    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:11.003127    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:11 GMT
	I0718 21:10:11.003248    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:11.500904    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:11.500932    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:11.500947    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:11.501036    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:11.503821    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:11.503839    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:11.503849    5402 round_trippers.go:580]     Audit-Id: a49d3659-82f3-4d60-be9d-4f012d907a20
	I0718 21:10:11.503855    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:11.503860    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:11.503867    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:11.503870    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:11.503873    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:11 GMT
	I0718 21:10:11.504285    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:11.504538    5402 node_ready.go:53] node "multinode-127000" has status "Ready":"False"
	I0718 21:10:11.999956    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:11.999971    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:11.999980    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:11.999985    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:12.001798    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:12.001825    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:12.001831    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:12 GMT
	I0718 21:10:12.001834    5402 round_trippers.go:580]     Audit-Id: 72284ad4-ee5d-4a67-aeec-d5af2f957e17
	I0718 21:10:12.001838    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:12.001840    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:12.001843    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:12.001845    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:12.001899    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:12.500700    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:12.500726    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:12.500736    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:12.500745    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:12.503390    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:12.503404    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:12.503411    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:12.503415    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:12 GMT
	I0718 21:10:12.503419    5402 round_trippers.go:580]     Audit-Id: 13359f05-5fad-4444-8962-6d3a0737f0ee
	I0718 21:10:12.503423    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:12.503428    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:12.503431    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:12.503572    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:12.999356    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:12.999412    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:12.999423    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:12.999427    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:13.001079    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:13.001089    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:13.001094    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:13.001096    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:13.001099    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:13.001102    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:13 GMT
	I0718 21:10:13.001104    5402 round_trippers.go:580]     Audit-Id: 61dd367b-59e9-421b-b7e4-fa15a7785756
	I0718 21:10:13.001107    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:13.001789    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:13.499737    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:13.499760    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:13.499772    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:13.499779    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:13.502416    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:13.502431    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:13.502439    5402 round_trippers.go:580]     Audit-Id: c49f39bc-c9f9-431f-9fcc-bf73289ae029
	I0718 21:10:13.502443    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:13.502446    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:13.502449    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:13.502453    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:13.502456    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:13 GMT
	I0718 21:10:13.502821    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:13.999639    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:13.999663    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:13.999676    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:13.999682    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:14.002319    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:14.002333    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:14.002340    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:14.002345    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:14.002351    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:14 GMT
	I0718 21:10:14.002354    5402 round_trippers.go:580]     Audit-Id: 1d8d8348-af75-43cc-bb25-5cf4b1b70702
	I0718 21:10:14.002359    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:14.002363    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:14.002692    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:14.002946    5402 node_ready.go:53] node "multinode-127000" has status "Ready":"False"
	I0718 21:10:14.500654    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:14.500683    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:14.500694    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:14.500703    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:14.503639    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:14.503654    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:14.503662    5402 round_trippers.go:580]     Audit-Id: 3601e1a2-f3bf-4906-88aa-54d9dd644e60
	I0718 21:10:14.503670    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:14.503676    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:14.503683    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:14.503697    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:14.503701    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:14 GMT
	I0718 21:10:14.504028    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:15.000251    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:15.000273    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:15.000285    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:15.000291    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:15.002965    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:15.002980    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:15.002987    5402 round_trippers.go:580]     Audit-Id: dd2b4810-1f51-481b-8c18-e75c4450f794
	I0718 21:10:15.002993    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:15.002997    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:15.003002    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:15.003005    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:15.003008    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:15 GMT
	I0718 21:10:15.003096    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:15.499836    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:15.499864    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:15.499955    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:15.499961    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:15.502641    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:15.502655    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:15.502662    5402 round_trippers.go:580]     Audit-Id: 3b6f25b1-8a37-4e9f-a9cb-c6f06cf759cc
	I0718 21:10:15.502667    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:15.502670    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:15.502674    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:15.502678    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:15.502681    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:15 GMT
	I0718 21:10:15.502813    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:15.999947    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:15.999973    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:15.999984    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:15.999991    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:16.002475    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:16.002490    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:16.002497    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:16.002501    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:16.002505    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:16.002508    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:16 GMT
	I0718 21:10:16.002512    5402 round_trippers.go:580]     Audit-Id: f8b75ac1-50e1-4a6c-b823-55c5dd28830d
	I0718 21:10:16.002516    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:16.002681    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:16.500871    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:16.500894    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:16.500906    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:16.500912    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:16.503579    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:16.503634    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:16.503646    5402 round_trippers.go:580]     Audit-Id: b577b064-022f-4f88-b0c4-42d7ba247cbc
	I0718 21:10:16.503651    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:16.503655    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:16.503659    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:16.503663    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:16.503684    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:16 GMT
	I0718 21:10:16.503787    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:16.504036    5402 node_ready.go:53] node "multinode-127000" has status "Ready":"False"
	I0718 21:10:16.999708    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:16.999729    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:16.999740    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:16.999749    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:17.002662    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:17.002679    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:17.002687    5402 round_trippers.go:580]     Audit-Id: e30e1cb9-a52d-4b44-bc39-bb752f99fcb7
	I0718 21:10:17.002691    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:17.002696    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:17.002699    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:17.002704    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:17.002709    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:17 GMT
	I0718 21:10:17.002996    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:17.500523    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:17.500590    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:17.500599    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:17.500605    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:17.503742    5402 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 21:10:17.503755    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:17.503760    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:17.503763    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:17.503766    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:17 GMT
	I0718 21:10:17.503769    5402 round_trippers.go:580]     Audit-Id: 97c2b796-5b1e-4c19-97a1-db0da2373f6c
	I0718 21:10:17.503774    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:17.503777    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:17.503837    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:18.000343    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:18.000369    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:18.000463    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:18.000474    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:18.003552    5402 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 21:10:18.003570    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:18.003581    5402 round_trippers.go:580]     Audit-Id: 30b525d9-35c9-4058-b70b-1441a5ee1fdf
	I0718 21:10:18.003589    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:18.003594    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:18.003599    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:18.003604    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:18.003610    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:18 GMT
	I0718 21:10:18.003836    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1281","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0718 21:10:18.004077    5402 node_ready.go:49] node "multinode-127000" has status "Ready":"True"
	I0718 21:10:18.004094    5402 node_ready.go:38] duration metric: took 17.004775847s for node "multinode-127000" to be "Ready" ...
	I0718 21:10:18.004102    5402 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0718 21:10:18.004147    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods
	I0718 21:10:18.004153    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:18.004160    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:18.004165    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:18.010372    5402 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0718 21:10:18.010388    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:18.010397    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:18 GMT
	I0718 21:10:18.010404    5402 round_trippers.go:580]     Audit-Id: baa2d715-54c9-4ae9-a895-32b741f94048
	I0718 21:10:18.010409    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:18.010413    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:18.010417    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:18.010425    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:18.011572    5402 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1282"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86051 chars]
	I0718 21:10:18.013391    5402 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-76x8d" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:18.013442    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:18.013448    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:18.013455    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:18.013459    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:18.016264    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:18.016273    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:18.016278    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:18.016281    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:18.016284    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:18.016287    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:18 GMT
	I0718 21:10:18.016290    5402 round_trippers.go:580]     Audit-Id: 4cfd4886-bcd3-426c-8ad8-db7910c4ddae
	I0718 21:10:18.016293    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:18.017214    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:18.017480    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:18.017487    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:18.017493    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:18.017496    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:18.021298    5402 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 21:10:18.021310    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:18.021314    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:18 GMT
	I0718 21:10:18.021351    5402 round_trippers.go:580]     Audit-Id: 1762a68c-3222-4831-a8bf-4e4ecf0046ec
	I0718 21:10:18.021357    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:18.021360    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:18.021362    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:18.021374    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:18.021449    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1281","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0718 21:10:18.514278    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:18.514298    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:18.514306    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:18.514310    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:18.516847    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:18.516857    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:18.516862    5402 round_trippers.go:580]     Audit-Id: 8c30ccc7-41ed-49e3-b39d-44bfee4628e0
	I0718 21:10:18.516866    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:18.516870    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:18.516873    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:18.516875    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:18.516878    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:18 GMT
	I0718 21:10:18.517189    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:18.517467    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:18.517475    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:18.517480    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:18.517484    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:18.518950    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:18.518958    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:18.518965    5402 round_trippers.go:580]     Audit-Id: 316fd4bd-dbcd-4706-972a-ef0c38aa8baf
	I0718 21:10:18.518969    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:18.518974    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:18.518978    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:18.518984    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:18.518988    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:18 GMT
	I0718 21:10:18.519055    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1281","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0718 21:10:19.013907    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:19.013930    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:19.013939    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:19.013945    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:19.017112    5402 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 21:10:19.017127    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:19.017137    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:19.017156    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:19 GMT
	I0718 21:10:19.017172    5402 round_trippers.go:580]     Audit-Id: 734cc069-c84e-4e5f-bad8-0a32cd34a4c1
	I0718 21:10:19.017180    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:19.017189    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:19.017194    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:19.017383    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:19.017738    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:19.017748    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:19.017755    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:19.017761    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:19.019077    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:19.019086    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:19.019091    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:19.019094    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:19 GMT
	I0718 21:10:19.019106    5402 round_trippers.go:580]     Audit-Id: 78664a5a-93c4-44c6-aab5-2b1073f8c551
	I0718 21:10:19.019113    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:19.019115    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:19.019119    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:19.019181    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1281","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0718 21:10:19.514205    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:19.514228    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:19.514240    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:19.514248    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:19.516881    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:19.516911    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:19.516935    5402 round_trippers.go:580]     Audit-Id: f8192e8b-6d2b-4f3f-ad77-ea71d7a77521
	I0718 21:10:19.516949    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:19.516961    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:19.516985    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:19.516992    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:19.516996    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:19 GMT
	I0718 21:10:19.517176    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:19.517528    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:19.517537    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:19.517545    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:19.517552    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:19.518898    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:19.518905    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:19.518910    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:19.518913    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:19.518916    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:19 GMT
	I0718 21:10:19.518918    5402 round_trippers.go:580]     Audit-Id: 4ff45ed1-6204-40b6-83bc-43b823c845b8
	I0718 21:10:19.518921    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:19.518923    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:19.519104    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1281","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0718 21:10:20.014931    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:20.014953    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:20.014966    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:20.014972    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:20.017821    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:20.017838    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:20.017846    5402 round_trippers.go:580]     Audit-Id: c29d33d6-1ca7-4768-976c-5ba0ee1d7485
	I0718 21:10:20.017850    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:20.017856    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:20.017860    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:20.017865    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:20.017869    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:20 GMT
	I0718 21:10:20.018054    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:20.018419    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:20.018429    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:20.018438    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:20.018443    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:20.020154    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:20.020163    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:20.020169    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:20.020174    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:20.020179    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:20 GMT
	I0718 21:10:20.020187    5402 round_trippers.go:580]     Audit-Id: 62373308-cc01-4a6f-81ff-e1feb672a3da
	I0718 21:10:20.020194    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:20.020199    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:20.020302    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:20.020475    5402 pod_ready.go:102] pod "coredns-7db6d8ff4d-76x8d" in "kube-system" namespace has status "Ready":"False"
	I0718 21:10:20.514938    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:20.514960    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:20.514972    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:20.514979    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:20.517509    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:20.517521    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:20.517528    5402 round_trippers.go:580]     Audit-Id: 5d3bc4d3-6dca-476b-9eff-b4b50e69351a
	I0718 21:10:20.517534    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:20.517541    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:20.517546    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:20.517551    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:20.517557    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:20 GMT
	I0718 21:10:20.517759    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:20.518121    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:20.518131    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:20.518139    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:20.518144    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:20.519532    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:20.519541    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:20.519545    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:20.519549    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:20.519552    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:20.519555    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:20.519565    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:20 GMT
	I0718 21:10:20.519569    5402 round_trippers.go:580]     Audit-Id: 8f634210-671d-412e-a970-d17a05bdcf46
	I0718 21:10:20.519735    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:21.015019    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:21.015041    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:21.015053    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:21.015059    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:21.017754    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:21.017768    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:21.017775    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:21.017779    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:21 GMT
	I0718 21:10:21.017806    5402 round_trippers.go:580]     Audit-Id: 273eaa1c-3bf9-42f7-bba9-89aab9243831
	I0718 21:10:21.017812    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:21.017817    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:21.017822    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:21.017914    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:21.018279    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:21.018288    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:21.018296    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:21.018299    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:21.019691    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:21.019699    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:21.019705    5402 round_trippers.go:580]     Audit-Id: fd16eefa-ec1d-4456-ac2b-a2d4acaf82e1
	I0718 21:10:21.019709    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:21.019714    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:21.019718    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:21.019721    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:21.019723    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:21 GMT
	I0718 21:10:21.019834    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:21.515168    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:21.515190    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:21.515201    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:21.515207    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:21.517972    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:21.517983    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:21.517989    5402 round_trippers.go:580]     Audit-Id: b7909345-ed7d-42f7-86aa-9698a1863426
	I0718 21:10:21.517995    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:21.518000    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:21.518004    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:21.518008    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:21.518013    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:21 GMT
	I0718 21:10:21.518580    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:21.518931    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:21.518941    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:21.518948    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:21.518954    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:21.525254    5402 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0718 21:10:21.525267    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:21.525273    5402 round_trippers.go:580]     Audit-Id: 3ed1f7ae-bf47-46e8-aae6-f1a580abdd5b
	I0718 21:10:21.525276    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:21.525279    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:21.525281    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:21.525283    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:21.525287    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:21 GMT
	I0718 21:10:21.525406    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:22.014693    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:22.014716    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:22.014727    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:22.014733    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:22.017558    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:22.017579    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:22.017590    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:22.017598    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:22.017604    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:22.017609    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:22.017618    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:22 GMT
	I0718 21:10:22.017623    5402 round_trippers.go:580]     Audit-Id: ab23d105-dc2a-4588-94ab-73e13b2ed1c8
	I0718 21:10:22.017821    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:22.018200    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:22.018211    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:22.018219    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:22.018223    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:22.019826    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:22.019834    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:22.019840    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:22.019843    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:22.019846    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:22.019849    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:22.019852    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:22 GMT
	I0718 21:10:22.019855    5402 round_trippers.go:580]     Audit-Id: 18410682-4d22-4ffb-942b-6a7146e12419
	I0718 21:10:22.020155    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:22.514466    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:22.514489    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:22.514501    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:22.514506    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:22.516641    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:22.516655    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:22.516662    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:22.516667    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:22.516697    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:22.516707    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:22.516710    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:22 GMT
	I0718 21:10:22.516714    5402 round_trippers.go:580]     Audit-Id: 90762c29-c0bb-490a-a02f-e2633b59233c
	I0718 21:10:22.516834    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:22.517197    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:22.517207    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:22.517213    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:22.517218    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:22.518636    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:22.518646    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:22.518654    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:22.518680    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:22 GMT
	I0718 21:10:22.518687    5402 round_trippers.go:580]     Audit-Id: 4e2c0d39-d197-4420-a139-ea5f26a16943
	I0718 21:10:22.518691    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:22.518695    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:22.518699    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:22.518926    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:22.519142    5402 pod_ready.go:102] pod "coredns-7db6d8ff4d-76x8d" in "kube-system" namespace has status "Ready":"False"
	I0718 21:10:23.013893    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:23.013914    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:23.013972    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:23.013990    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:23.016656    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:23.016671    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:23.016678    5402 round_trippers.go:580]     Audit-Id: 3bfd7f63-983b-4f98-89a9-63c6391a93f8
	I0718 21:10:23.016682    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:23.016686    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:23.016690    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:23.016696    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:23.016700    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:23 GMT
	I0718 21:10:23.016836    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:23.017197    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:23.017207    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:23.017214    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:23.017219    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:23.018749    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:23.018762    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:23.018768    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:23 GMT
	I0718 21:10:23.018773    5402 round_trippers.go:580]     Audit-Id: e798554c-47e0-48c2-b8a8-97d6cc687aa2
	I0718 21:10:23.018776    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:23.018779    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:23.018784    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:23.018788    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:23.018985    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:23.514483    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:23.514505    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:23.514516    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:23.514546    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:23.517145    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:23.517161    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:23.517168    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:23.517181    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:23.517186    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:23.517189    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:23 GMT
	I0718 21:10:23.517192    5402 round_trippers.go:580]     Audit-Id: 9f2c692d-a134-40eb-a88e-4275bef4ebeb
	I0718 21:10:23.517196    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:23.517303    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:23.517659    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:23.517669    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:23.517677    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:23.517682    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:23.519160    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:23.519169    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:23.519174    5402 round_trippers.go:580]     Audit-Id: 78039b5e-627a-4230-a2ad-d4d98ec03005
	I0718 21:10:23.519183    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:23.519188    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:23.519193    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:23.519197    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:23.519199    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:23 GMT
	I0718 21:10:23.519264    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:24.015894    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:24.015918    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:24.015928    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:24.015934    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:24.018450    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:24.018462    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:24.018470    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:24 GMT
	I0718 21:10:24.018476    5402 round_trippers.go:580]     Audit-Id: ec2616a2-32ba-400e-aa72-1b156ae1be8e
	I0718 21:10:24.018482    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:24.018486    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:24.018492    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:24.018499    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:24.018765    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:24.019125    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:24.019135    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:24.019142    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:24.019147    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:24.020557    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:24.020565    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:24.020570    5402 round_trippers.go:580]     Audit-Id: a0ce9a8e-f05f-493d-bb48-da37b2561ae6
	I0718 21:10:24.020573    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:24.020576    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:24.020578    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:24.020581    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:24.020583    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:24 GMT
	I0718 21:10:24.020684    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:24.514857    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:24.514879    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:24.514891    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:24.514898    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:24.517625    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:24.517645    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:24.517656    5402 round_trippers.go:580]     Audit-Id: b0470f56-149f-4a0f-935d-2c898dad6508
	I0718 21:10:24.517663    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:24.517669    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:24.517672    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:24.517676    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:24.517680    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:24 GMT
	I0718 21:10:24.517871    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:24.518232    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:24.518243    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:24.518251    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:24.518257    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:24.519628    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:24.519637    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:24.519642    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:24.519647    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:24.519650    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:24.519653    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:24 GMT
	I0718 21:10:24.519657    5402 round_trippers.go:580]     Audit-Id: a1bcaf63-ab4b-44ca-9e95-80891120857c
	I0718 21:10:24.519660    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:24.519729    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:24.519902    5402 pod_ready.go:102] pod "coredns-7db6d8ff4d-76x8d" in "kube-system" namespace has status "Ready":"False"
	I0718 21:10:25.015089    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:25.015109    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:25.015120    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:25.015127    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:25.017649    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:25.017660    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:25.017665    5402 round_trippers.go:580]     Audit-Id: 9e88d63b-6297-4eba-88ff-bb00fa210573
	I0718 21:10:25.017669    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:25.017671    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:25.017674    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:25.017676    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:25.017679    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:25 GMT
	I0718 21:10:25.017809    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:25.018174    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:25.018194    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:25.018221    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:25.018228    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:25.020050    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:25.020058    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:25.020063    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:25.020067    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:25 GMT
	I0718 21:10:25.020070    5402 round_trippers.go:580]     Audit-Id: 158c095f-56b9-4462-a047-f38e05ce561a
	I0718 21:10:25.020080    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:25.020083    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:25.020086    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:25.020218    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:25.514354    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:25.514395    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:25.514406    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:25.514413    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:25.516975    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:25.516992    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:25.517002    5402 round_trippers.go:580]     Audit-Id: 479b3a4c-eca7-446f-aae2-037ca1e7e119
	I0718 21:10:25.517009    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:25.517016    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:25.517022    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:25.517028    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:25.517033    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:25 GMT
	I0718 21:10:25.517351    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:25.517720    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:25.517730    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:25.517738    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:25.517743    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:25.519107    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:25.519116    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:25.519123    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:25.519128    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:25.519133    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:25 GMT
	I0718 21:10:25.519136    5402 round_trippers.go:580]     Audit-Id: 35f2cfc3-17ac-4bcb-80b1-a620f655bc0b
	I0718 21:10:25.519139    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:25.519142    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:25.519206    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:26.014046    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:26.014069    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:26.014079    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:26.014085    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:26.017322    5402 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 21:10:26.017339    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:26.017350    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:26.017358    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:26.017364    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:26.017370    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:26.017376    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:26 GMT
	I0718 21:10:26.017381    5402 round_trippers.go:580]     Audit-Id: 351c6f96-ef6e-49af-a5b5-6a61b6346526
	I0718 21:10:26.017599    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:26.017977    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:26.017986    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:26.017993    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:26.017998    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:26.019574    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:26.019582    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:26.019590    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:26.019593    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:26.019597    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:26 GMT
	I0718 21:10:26.019600    5402 round_trippers.go:580]     Audit-Id: 0f74dfd0-172c-4d56-83a7-d23eed64839f
	I0718 21:10:26.019604    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:26.019608    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:26.019813    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:26.515902    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:26.515946    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:26.515958    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:26.515964    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:26.519178    5402 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 21:10:26.519195    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:26.519202    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:26.519216    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:26.519220    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:26.519224    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:26 GMT
	I0718 21:10:26.519229    5402 round_trippers.go:580]     Audit-Id: 6fea295e-ae8c-4f1a-ab25-d6d0ad3d4426
	I0718 21:10:26.519232    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:26.519623    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:26.520004    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:26.520018    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:26.520025    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:26.520031    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:26.521420    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:26.521427    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:26.521434    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:26.521439    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:26.521443    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:26.521447    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:26.521451    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:26 GMT
	I0718 21:10:26.521457    5402 round_trippers.go:580]     Audit-Id: 0525ca34-f642-4751-9343-3fda7a40f62e
	I0718 21:10:26.521597    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:26.521769    5402 pod_ready.go:102] pod "coredns-7db6d8ff4d-76x8d" in "kube-system" namespace has status "Ready":"False"
	I0718 21:10:27.014394    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:27.014436    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:27.014449    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:27.014456    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:27.017270    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:27.017331    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:27.017345    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:27 GMT
	I0718 21:10:27.017354    5402 round_trippers.go:580]     Audit-Id: a5259704-9e04-4af6-b60c-fc532efa6823
	I0718 21:10:27.017359    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:27.017363    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:27.017368    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:27.017372    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:27.017457    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:27.017820    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:27.017829    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:27.017836    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:27.017842    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:27.019171    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:27.019180    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:27.019185    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:27.019188    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:27.019191    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:27.019195    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:27 GMT
	I0718 21:10:27.019205    5402 round_trippers.go:580]     Audit-Id: 45ca80df-c705-4a43-91c1-8d0c73916b70
	I0718 21:10:27.019208    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:27.019268    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:27.513888    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:27.513899    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:27.513904    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:27.513907    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:27.515580    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:27.515592    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:27.515599    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:27.515616    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:27.515637    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:27.515644    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:27.515648    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:27 GMT
	I0718 21:10:27.515651    5402 round_trippers.go:580]     Audit-Id: 9388a1e8-495e-4588-a9f6-f8600e34fcf5
	I0718 21:10:27.515776    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:27.516053    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:27.516060    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:27.516066    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:27.516070    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:27.517004    5402 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0718 21:10:27.517013    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:27.517018    5402 round_trippers.go:580]     Audit-Id: c28374f6-eacd-4ff0-be53-ea03af6a450c
	I0718 21:10:27.517021    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:27.517025    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:27.517028    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:27.517043    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:27.517061    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:27 GMT
	I0718 21:10:27.517194    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:28.013901    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:28.013919    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:28.013931    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:28.013939    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:28.016324    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:28.016337    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:28.016345    5402 round_trippers.go:580]     Audit-Id: 281c6eef-9bbf-4af8-bb27-090bb956d97f
	I0718 21:10:28.016349    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:28.016353    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:28.016356    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:28.016360    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:28.016364    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:28 GMT
	I0718 21:10:28.016666    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:28.017049    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:28.017058    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:28.017066    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:28.017071    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:28.018605    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:28.018613    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:28.018622    5402 round_trippers.go:580]     Audit-Id: f83f51e6-3074-45dc-98e6-7a303f6ca8a2
	I0718 21:10:28.018627    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:28.018633    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:28.018636    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:28.018640    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:28.018644    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:28 GMT
	I0718 21:10:28.018756    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:28.513919    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:28.513932    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:28.513939    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:28.513942    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:28.515647    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:28.515655    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:28.515660    5402 round_trippers.go:580]     Audit-Id: aca584cb-87d1-4433-b687-bad0692fbd83
	I0718 21:10:28.515663    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:28.515666    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:28.515670    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:28.515676    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:28.515682    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:28 GMT
	I0718 21:10:28.515918    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:28.516207    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:28.516214    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:28.516219    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:28.516223    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:28.520408    5402 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0718 21:10:28.520417    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:28.520423    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:28.520426    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:28.520429    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:28.520432    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:28.520435    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:28 GMT
	I0718 21:10:28.520437    5402 round_trippers.go:580]     Audit-Id: 7a21bfcf-272b-4f2b-9902-ddff80397717
	I0718 21:10:28.520575    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:29.015146    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:29.015172    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:29.015182    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:29.015188    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:29.017924    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:29.017937    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:29.017943    5402 round_trippers.go:580]     Audit-Id: 147a57d4-fb29-46e8-ae73-4c2645307d77
	I0718 21:10:29.017947    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:29.017951    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:29.017954    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:29.017957    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:29.017999    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:29 GMT
	I0718 21:10:29.018400    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:29.018747    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:29.018756    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:29.018764    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:29.018769    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:29.020133    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:29.020141    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:29.020148    5402 round_trippers.go:580]     Audit-Id: 25eb8f52-a76c-421b-8177-08516cc946d0
	I0718 21:10:29.020153    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:29.020158    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:29.020162    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:29.020165    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:29.020168    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:29 GMT
	I0718 21:10:29.020229    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:29.020397    5402 pod_ready.go:102] pod "coredns-7db6d8ff4d-76x8d" in "kube-system" namespace has status "Ready":"False"
	I0718 21:10:29.514185    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:29.514196    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:29.514205    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:29.514208    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:29.515844    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:29.515854    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:29.515860    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:29 GMT
	I0718 21:10:29.515862    5402 round_trippers.go:580]     Audit-Id: 7ce3860a-0ef8-49d0-af33-90506b97a3ba
	I0718 21:10:29.515865    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:29.515867    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:29.515870    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:29.515872    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:29.515966    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:29.516259    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:29.516267    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:29.516273    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:29.516276    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:29.517487    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:29.517496    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:29.517500    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:29.517503    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:29.517507    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:29.517509    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:29.517512    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:29 GMT
	I0718 21:10:29.517515    5402 round_trippers.go:580]     Audit-Id: 12584d93-9071-4ed6-b688-48b24c5ed3b3
	I0718 21:10:29.517756    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:30.013909    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:30.013960    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.013965    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.013968    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.015752    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:30.015762    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.015767    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.015771    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:30 GMT
	I0718 21:10:30.015773    5402 round_trippers.go:580]     Audit-Id: 9be1232e-6674-47e1-95ab-c1bbe338685f
	I0718 21:10:30.015776    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.015778    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.015781    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.015836    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:30.016118    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:30.016126    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.016131    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.016134    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.017505    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:30.017530    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.017551    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.017557    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.017559    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.017562    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:30 GMT
	I0718 21:10:30.017565    5402 round_trippers.go:580]     Audit-Id: c27764f4-da70-4eef-b36d-83b0996c892b
	I0718 21:10:30.017568    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.017905    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:30.515137    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:30.515164    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.515177    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.515186    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.517792    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:30.517811    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.517819    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.517824    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.517828    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.517833    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:30 GMT
	I0718 21:10:30.517837    5402 round_trippers.go:580]     Audit-Id: f9c4f12b-9a6b-48e7-bcd2-63e9393b6422
	I0718 21:10:30.517840    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.517936    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1304","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6784 chars]
	I0718 21:10:30.518316    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:30.518326    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.518333    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.518337    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.519904    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:30.519913    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.519918    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.519921    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.519926    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.519928    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.519931    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:30 GMT
	I0718 21:10:30.519934    5402 round_trippers.go:580]     Audit-Id: a0b93586-011f-4d75-a2aa-ab5d59412098
	I0718 21:10:30.520018    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:30.520211    5402 pod_ready.go:92] pod "coredns-7db6d8ff4d-76x8d" in "kube-system" namespace has status "Ready":"True"
	I0718 21:10:30.520220    5402 pod_ready.go:81] duration metric: took 12.506446695s for pod "coredns-7db6d8ff4d-76x8d" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:30.520226    5402 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-127000" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:30.520257    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-127000
	I0718 21:10:30.520262    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.520267    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.520271    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.521418    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:30.521428    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.521433    5402 round_trippers.go:580]     Audit-Id: 4f2db715-bf9a-4abd-80e1-937d58db2e88
	I0718 21:10:30.521436    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.521450    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.521455    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.521457    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.521484    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:30 GMT
	I0718 21:10:30.521611    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-127000","namespace":"kube-system","uid":"4d4a84eb-c0c3-44f3-a515-e99b9ba8fe88","resourceVersion":"1241","creationTimestamp":"2024-07-19T04:02:50Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.17:2379","kubernetes.io/config.hash":"dd34cf21994d39cf28d26460f62a29d2","kubernetes.io/config.mirror":"dd34cf21994d39cf28d26460f62a29d2","kubernetes.io/config.seen":"2024-07-19T04:02:50.143265078Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:02:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6358 chars]
	I0718 21:10:30.521822    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:30.521828    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.521834    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.521837    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.523142    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:30.523149    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.523153    5402 round_trippers.go:580]     Audit-Id: 74420363-7ff4-4026-9014-8270e3825bb6
	I0718 21:10:30.523156    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.523159    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.523162    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.523164    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.523166    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:30 GMT
	I0718 21:10:30.523484    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:30.523650    5402 pod_ready.go:92] pod "etcd-multinode-127000" in "kube-system" namespace has status "Ready":"True"
	I0718 21:10:30.523658    5402 pod_ready.go:81] duration metric: took 3.425685ms for pod "etcd-multinode-127000" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:30.523668    5402 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-127000" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:30.523712    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-127000
	I0718 21:10:30.523717    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.523723    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.523727    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.524843    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:30.524852    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.524857    5402 round_trippers.go:580]     Audit-Id: d160e265-02ee-46fc-8d11-17a48e178963
	I0718 21:10:30.524860    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.524863    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.524866    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.524869    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.524872    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:30 GMT
	I0718 21:10:30.525090    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-127000","namespace":"kube-system","uid":"15bce3aa-75a4-4cca-beec-20a4eeed2c14","resourceVersion":"1272","creationTimestamp":"2024-07-19T04:02:50Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.17:8443","kubernetes.io/config.hash":"adeddd763cb12ff26454c97d2cb34645","kubernetes.io/config.mirror":"adeddd763cb12ff26454c97d2cb34645","kubernetes.io/config.seen":"2024-07-19T04:02:50.143265837Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:02:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7892 chars]
	I0718 21:10:30.525320    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:30.525327    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.525332    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.525336    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.528297    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:30.528303    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.528308    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.528311    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.528313    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.528316    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:30 GMT
	I0718 21:10:30.528319    5402 round_trippers.go:580]     Audit-Id: 4f60f004-b1db-4418-8637-dff6740cc8cb
	I0718 21:10:30.528322    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.528566    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:30.528728    5402 pod_ready.go:92] pod "kube-apiserver-multinode-127000" in "kube-system" namespace has status "Ready":"True"
	I0718 21:10:30.528735    5402 pod_ready.go:81] duration metric: took 5.06175ms for pod "kube-apiserver-multinode-127000" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:30.528743    5402 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-127000" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:30.528773    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-127000
	I0718 21:10:30.528777    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.528784    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.528787    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.531302    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:30.531310    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.531315    5402 round_trippers.go:580]     Audit-Id: c19db514-5ec1-4c55-afda-e6884af87d1c
	I0718 21:10:30.531318    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.531322    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.531326    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.531328    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.531330    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:30 GMT
	I0718 21:10:30.531621    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-127000","namespace":"kube-system","uid":"38250320-d12a-418f-867a-05a82f4f876c","resourceVersion":"1251","creationTimestamp":"2024-07-19T04:02:50Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"14d5cf7b26b6a66b49878f0b6b5873c6","kubernetes.io/config.mirror":"14d5cf7b26b6a66b49878f0b6b5873c6","kubernetes.io/config.seen":"2024-07-19T04:02:50.143266437Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:02:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7467 chars]
	I0718 21:10:30.531867    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:30.531873    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.531878    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.531882    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.533311    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:30.533318    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.533323    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:30 GMT
	I0718 21:10:30.533325    5402 round_trippers.go:580]     Audit-Id: a4ca03a9-9f48-462f-8995-b8453aa7ca09
	I0718 21:10:30.533328    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.533331    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.533333    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.533337    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.533455    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:30.533626    5402 pod_ready.go:92] pod "kube-controller-manager-multinode-127000" in "kube-system" namespace has status "Ready":"True"
	I0718 21:10:30.533633    5402 pod_ready.go:81] duration metric: took 4.885281ms for pod "kube-controller-manager-multinode-127000" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:30.533646    5402 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8j597" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:30.533672    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8j597
	I0718 21:10:30.533677    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.533682    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.533687    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.535055    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:30.535062    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.535067    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.535070    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.535073    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.535090    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:30 GMT
	I0718 21:10:30.535095    5402 round_trippers.go:580]     Audit-Id: e0f8c5a0-3a97-4ccc-ab65-569fd2c0a88e
	I0718 21:10:30.535101    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.535416    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8j597","generateName":"kube-proxy-","namespace":"kube-system","uid":"51e85da8-2b18-4373-8f84-65ed52d6bc13","resourceVersion":"1162","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4f07a990-f65d-45d1-9766-77572b6fc4bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4f07a990-f65d-45d1-9766-77572b6fc4bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0718 21:10:30.535655    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:30.535662    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.535668    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.535670    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.536855    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:30.536862    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.536867    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.536876    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.536879    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.536882    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:30 GMT
	I0718 21:10:30.536885    5402 round_trippers.go:580]     Audit-Id: 7989778c-5268-4207-9bcd-c3238442a1b7
	I0718 21:10:30.536887    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.536998    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:30.537165    5402 pod_ready.go:92] pod "kube-proxy-8j597" in "kube-system" namespace has status "Ready":"True"
	I0718 21:10:30.537172    5402 pod_ready.go:81] duration metric: took 3.521164ms for pod "kube-proxy-8j597" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:30.537183    5402 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8nvff" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:30.716043    5402 request.go:629] Waited for 178.781715ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8nvff
	I0718 21:10:30.716214    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8nvff
	I0718 21:10:30.716225    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.716235    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.716243    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.718855    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:30.718870    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.718881    5402 round_trippers.go:580]     Audit-Id: e89f1c54-25b7-4a8f-8797-cc1601990cde
	I0718 21:10:30.718889    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.718896    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.718901    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.718906    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.718914    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:30 GMT
	I0718 21:10:30.719090    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8nvff","generateName":"kube-proxy-","namespace":"kube-system","uid":"4b740c91-be18-4bc8-9698-0b4fbda8695e","resourceVersion":"1110","creationTimestamp":"2024-07-19T04:04:35Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4f07a990-f65d-45d1-9766-77572b6fc4bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:04:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4f07a990-f65d-45d1-9766-77572b6fc4bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0718 21:10:30.916443    5402 request.go:629] Waited for 197.0029ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/nodes/multinode-127000-m03
	I0718 21:10:30.916560    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000-m03
	I0718 21:10:30.916571    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.916583    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.916593    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.919137    5402 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0718 21:10:30.919151    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.919158    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:31 GMT
	I0718 21:10:30.919163    5402 round_trippers.go:580]     Audit-Id: 3e32e6ae-fa8a-41f5-b77a-e81a425989dd
	I0718 21:10:30.919188    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.919195    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.919198    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.919201    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.919205    5402 round_trippers.go:580]     Content-Length: 210
	I0718 21:10:30.919233    5402 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-127000-m03\" not found","reason":"NotFound","details":{"name":"multinode-127000-m03","kind":"nodes"},"code":404}
	I0718 21:10:30.919295    5402 pod_ready.go:97] node "multinode-127000-m03" hosting pod "kube-proxy-8nvff" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-127000-m03": nodes "multinode-127000-m03" not found
	I0718 21:10:30.919308    5402 pod_ready.go:81] duration metric: took 382.107283ms for pod "kube-proxy-8nvff" in "kube-system" namespace to be "Ready" ...
	E0718 21:10:30.919323    5402 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-127000-m03" hosting pod "kube-proxy-8nvff" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-127000-m03": nodes "multinode-127000-m03" not found
	I0718 21:10:30.919331    5402 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nxf5m" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:31.116436    5402 request.go:629] Waited for 197.053347ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nxf5m
	I0718 21:10:31.116635    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nxf5m
	I0718 21:10:31.116647    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:31.116658    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:31.116666    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:31.119218    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:31.119232    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:31.119240    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:31 GMT
	I0718 21:10:31.119243    5402 round_trippers.go:580]     Audit-Id: 3e0578f4-7a04-4fb9-88aa-2566ba6d076d
	I0718 21:10:31.119246    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:31.119251    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:31.119254    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:31.119257    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:31.119490    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nxf5m","generateName":"kube-proxy-","namespace":"kube-system","uid":"e48c420f-b1a1-4a9e-bc7e-fa0d640e5764","resourceVersion":"993","creationTimestamp":"2024-07-19T04:03:47Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4f07a990-f65d-45d1-9766-77572b6fc4bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4f07a990-f65d-45d1-9766-77572b6fc4bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0718 21:10:31.315922    5402 request.go:629] Waited for 196.097926ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/nodes/multinode-127000-m02
	I0718 21:10:31.316050    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000-m02
	I0718 21:10:31.316060    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:31.316070    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:31.316077    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:31.318688    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:31.318704    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:31.318711    5402 round_trippers.go:580]     Audit-Id: b7d79a4e-0f18-41da-b2a1-191205ade99f
	I0718 21:10:31.318715    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:31.318719    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:31.318722    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:31.318727    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:31.318733    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:31 GMT
	I0718 21:10:31.318814    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000-m02","uid":"7e73463a-ae2d-4a9c-a2b8-e12809583e97","resourceVersion":"1019","creationTimestamp":"2024-07-19T04:07:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_18T21_07_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:07:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3811 chars]
	I0718 21:10:31.319038    5402 pod_ready.go:92] pod "kube-proxy-nxf5m" in "kube-system" namespace has status "Ready":"True"
	I0718 21:10:31.319049    5402 pod_ready.go:81] duration metric: took 399.698182ms for pod "kube-proxy-nxf5m" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:31.319057    5402 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-127000" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:31.516543    5402 request.go:629] Waited for 197.436038ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-127000
	I0718 21:10:31.516674    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-127000
	I0718 21:10:31.516688    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:31.516697    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:31.516703    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:31.519138    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:31.519151    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:31.519158    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:31.519162    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:31.519166    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:31.519168    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:31.519172    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:31 GMT
	I0718 21:10:31.519177    5402 round_trippers.go:580]     Audit-Id: b965ef6e-9342-4a2e-9db6-5beadfcfd87b
	I0718 21:10:31.519355    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-127000","namespace":"kube-system","uid":"3060259c-364e-4c24-ae43-107cc1973705","resourceVersion":"1268","creationTimestamp":"2024-07-19T04:02:50Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"746f7833447444339ca9b76cec94dc1f","kubernetes.io/config.mirror":"746f7833447444339ca9b76cec94dc1f","kubernetes.io/config.seen":"2024-07-19T04:02:50.143262549Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:02:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0718 21:10:31.715822    5402 request.go:629] Waited for 196.078642ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:31.715865    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:31.715873    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:31.715882    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:31.715887    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:31.717353    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:31.717361    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:31.717365    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:31.717369    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:31 GMT
	I0718 21:10:31.717371    5402 round_trippers.go:580]     Audit-Id: c2b7c130-13ea-4a55-8cb2-9153e81bf749
	I0718 21:10:31.717374    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:31.717377    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:31.717380    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:31.717586    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:31.717794    5402 pod_ready.go:92] pod "kube-scheduler-multinode-127000" in "kube-system" namespace has status "Ready":"True"
	I0718 21:10:31.717803    5402 pod_ready.go:81] duration metric: took 398.728219ms for pod "kube-scheduler-multinode-127000" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:31.717810    5402 pod_ready.go:38] duration metric: took 13.713291891s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0718 21:10:31.717821    5402 api_server.go:52] waiting for apiserver process to appear ...
	I0718 21:10:31.717873    5402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 21:10:31.729465    5402 command_runner.go:130] > 1628
	I0718 21:10:31.729708    5402 api_server.go:72] duration metric: took 30.963125586s to wait for apiserver process to appear ...
	I0718 21:10:31.729717    5402 api_server.go:88] waiting for apiserver healthz status ...
	I0718 21:10:31.729732    5402 api_server.go:253] Checking apiserver healthz at https://192.169.0.17:8443/healthz ...
	I0718 21:10:31.733204    5402 api_server.go:279] https://192.169.0.17:8443/healthz returned 200:
	ok
	I0718 21:10:31.733233    5402 round_trippers.go:463] GET https://192.169.0.17:8443/version
	I0718 21:10:31.733238    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:31.733243    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:31.733247    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:31.733811    5402 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0718 21:10:31.733819    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:31.733824    5402 round_trippers.go:580]     Audit-Id: ef96c85b-1293-47b4-9817-f2eb7f350539
	I0718 21:10:31.733828    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:31.733831    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:31.733835    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:31.733837    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:31.733841    5402 round_trippers.go:580]     Content-Length: 263
	I0718 21:10:31.733845    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:31 GMT
	I0718 21:10:31.733857    5402 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0718 21:10:31.733878    5402 api_server.go:141] control plane version: v1.30.3
	I0718 21:10:31.733886    5402 api_server.go:131] duration metric: took 4.161354ms to wait for apiserver health ...
	I0718 21:10:31.733891    5402 system_pods.go:43] waiting for kube-system pods to appear ...
	I0718 21:10:31.916683    5402 request.go:629] Waited for 182.720249ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods
	I0718 21:10:31.916763    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods
	I0718 21:10:31.916775    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:31.916786    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:31.916794    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:31.920682    5402 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 21:10:31.920697    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:31.920704    5402 round_trippers.go:580]     Audit-Id: 1b3bf7b3-e4a9-4b19-b4fc-ab8057c6af44
	I0718 21:10:31.920708    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:31.920712    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:31.920716    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:31.920719    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:31.920723    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:32 GMT
	I0718 21:10:31.921862    5402 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1311"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1304","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86411 chars]
	I0718 21:10:31.923756    5402 system_pods.go:59] 12 kube-system pods found
	I0718 21:10:31.923766    5402 system_pods.go:61] "coredns-7db6d8ff4d-76x8d" [55e9cca6-f3d6-4b2f-a8de-df91db8e186a] Running
	I0718 21:10:31.923770    5402 system_pods.go:61] "etcd-multinode-127000" [4d4a84eb-c0c3-44f3-a515-e99b9ba8fe88] Running
	I0718 21:10:31.923782    5402 system_pods.go:61] "kindnet-28cb8" [f603b4ff-800e-40e6-9c53-20626c4dfd35] Running
	I0718 21:10:31.923786    5402 system_pods.go:61] "kindnet-ks8xk" [358f14a8-284b-4570-96d1-d519f18269fa] Running
	I0718 21:10:31.923789    5402 system_pods.go:61] "kindnet-lt5bk" [f81f29e6-917b-4347-ad73-aa9b51320b17] Running
	I0718 21:10:31.923792    5402 system_pods.go:61] "kube-apiserver-multinode-127000" [15bce3aa-75a4-4cca-beec-20a4eeed2c14] Running
	I0718 21:10:31.923795    5402 system_pods.go:61] "kube-controller-manager-multinode-127000" [38250320-d12a-418f-867a-05a82f4f876c] Running
	I0718 21:10:31.923798    5402 system_pods.go:61] "kube-proxy-8j597" [51e85da8-2b18-4373-8f84-65ed52d6bc13] Running
	I0718 21:10:31.923801    5402 system_pods.go:61] "kube-proxy-8nvff" [4b740c91-be18-4bc8-9698-0b4fbda8695e] Running
	I0718 21:10:31.923803    5402 system_pods.go:61] "kube-proxy-nxf5m" [e48c420f-b1a1-4a9e-bc7e-fa0d640e5764] Running
	I0718 21:10:31.923805    5402 system_pods.go:61] "kube-scheduler-multinode-127000" [3060259c-364e-4c24-ae43-107cc1973705] Running
	I0718 21:10:31.923809    5402 system_pods.go:61] "storage-provisioner" [cd072b88-33f2-4988-985a-f1a00f8eb449] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0718 21:10:31.923814    5402 system_pods.go:74] duration metric: took 189.912921ms to wait for pod list to return data ...
	I0718 21:10:31.923824    5402 default_sa.go:34] waiting for default service account to be created ...
	I0718 21:10:32.115683    5402 request.go:629] Waited for 191.790892ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/namespaces/default/serviceaccounts
	I0718 21:10:32.115867    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/default/serviceaccounts
	I0718 21:10:32.115878    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:32.115889    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:32.115897    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:32.118782    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:32.118795    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:32.118802    5402 round_trippers.go:580]     Audit-Id: bea34ab0-83a1-4c36-8484-ce99f2f99ef5
	I0718 21:10:32.118806    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:32.118810    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:32.118814    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:32.118817    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:32.118822    5402 round_trippers.go:580]     Content-Length: 262
	I0718 21:10:32.118825    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:32 GMT
	I0718 21:10:32.118850    5402 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1311"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"5f5cf476-ad0a-497e-bf9b-7a00ccdfe7cb","resourceVersion":"307","creationTimestamp":"2024-07-19T04:03:03Z"}}]}
	I0718 21:10:32.118980    5402 default_sa.go:45] found service account: "default"
	I0718 21:10:32.118992    5402 default_sa.go:55] duration metric: took 195.156298ms for default service account to be created ...
	I0718 21:10:32.118999    5402 system_pods.go:116] waiting for k8s-apps to be running ...
	I0718 21:10:32.316459    5402 request.go:629] Waited for 197.411151ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods
	I0718 21:10:32.316624    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods
	I0718 21:10:32.316636    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:32.316648    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:32.316654    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:32.320966    5402 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0718 21:10:32.320993    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:32.321000    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:32.321005    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:32.321009    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:32 GMT
	I0718 21:10:32.321013    5402 round_trippers.go:580]     Audit-Id: 97783e68-54b0-4f9d-b7ea-fa80d5c4bcf2
	I0718 21:10:32.321017    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:32.321020    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:32.321862    5402 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1311"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1304","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86411 chars]
	I0718 21:10:32.323777    5402 system_pods.go:86] 12 kube-system pods found
	I0718 21:10:32.323786    5402 system_pods.go:89] "coredns-7db6d8ff4d-76x8d" [55e9cca6-f3d6-4b2f-a8de-df91db8e186a] Running
	I0718 21:10:32.323790    5402 system_pods.go:89] "etcd-multinode-127000" [4d4a84eb-c0c3-44f3-a515-e99b9ba8fe88] Running
	I0718 21:10:32.323793    5402 system_pods.go:89] "kindnet-28cb8" [f603b4ff-800e-40e6-9c53-20626c4dfd35] Running
	I0718 21:10:32.323797    5402 system_pods.go:89] "kindnet-ks8xk" [358f14a8-284b-4570-96d1-d519f18269fa] Running
	I0718 21:10:32.323801    5402 system_pods.go:89] "kindnet-lt5bk" [f81f29e6-917b-4347-ad73-aa9b51320b17] Running
	I0718 21:10:32.323804    5402 system_pods.go:89] "kube-apiserver-multinode-127000" [15bce3aa-75a4-4cca-beec-20a4eeed2c14] Running
	I0718 21:10:32.323807    5402 system_pods.go:89] "kube-controller-manager-multinode-127000" [38250320-d12a-418f-867a-05a82f4f876c] Running
	I0718 21:10:32.323810    5402 system_pods.go:89] "kube-proxy-8j597" [51e85da8-2b18-4373-8f84-65ed52d6bc13] Running
	I0718 21:10:32.323813    5402 system_pods.go:89] "kube-proxy-8nvff" [4b740c91-be18-4bc8-9698-0b4fbda8695e] Running
	I0718 21:10:32.323819    5402 system_pods.go:89] "kube-proxy-nxf5m" [e48c420f-b1a1-4a9e-bc7e-fa0d640e5764] Running
	I0718 21:10:32.323822    5402 system_pods.go:89] "kube-scheduler-multinode-127000" [3060259c-364e-4c24-ae43-107cc1973705] Running
	I0718 21:10:32.323828    5402 system_pods.go:89] "storage-provisioner" [cd072b88-33f2-4988-985a-f1a00f8eb449] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0718 21:10:32.323833    5402 system_pods.go:126] duration metric: took 204.823908ms to wait for k8s-apps to be running ...
	I0718 21:10:32.323841    5402 system_svc.go:44] waiting for kubelet service to be running ....
	I0718 21:10:32.323888    5402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 21:10:32.334816    5402 system_svc.go:56] duration metric: took 10.972843ms WaitForService to wait for kubelet
	I0718 21:10:32.334831    5402 kubeadm.go:582] duration metric: took 31.568232076s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:10:32.334849    5402 node_conditions.go:102] verifying NodePressure condition ...
	I0718 21:10:32.515263    5402 request.go:629] Waited for 180.343781ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/nodes
	I0718 21:10:32.515304    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes
	I0718 21:10:32.515309    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:32.515314    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:32.515318    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:32.516974    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:32.516984    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:32.516993    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:32.516996    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:32.517000    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:32.517003    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:32.517006    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:32 GMT
	I0718 21:10:32.517010    5402 round_trippers.go:580]     Audit-Id: 5d131641-1a51-4ec3-a8dc-ea2e9525673c
	I0718 21:10:32.517127    5402 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1311"},"items":[{"metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10031 chars]
	I0718 21:10:32.517446    5402 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0718 21:10:32.517454    5402 node_conditions.go:123] node cpu capacity is 2
	I0718 21:10:32.517461    5402 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0718 21:10:32.517466    5402 node_conditions.go:123] node cpu capacity is 2
	I0718 21:10:32.517470    5402 node_conditions.go:105] duration metric: took 182.610645ms to run NodePressure ...
	I0718 21:10:32.517477    5402 start.go:241] waiting for startup goroutines ...
	I0718 21:10:32.517483    5402 start.go:246] waiting for cluster config update ...
	I0718 21:10:32.517489    5402 start.go:255] writing updated cluster config ...
	I0718 21:10:32.541277    5402 out.go:177] 
	I0718 21:10:32.563564    5402 config.go:182] Loaded profile config "multinode-127000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:10:32.563691    5402 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/config.json ...
	I0718 21:10:32.586168    5402 out.go:177] * Starting "multinode-127000-m02" worker node in "multinode-127000" cluster
	I0718 21:10:32.628997    5402 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:10:32.629023    5402 cache.go:56] Caching tarball of preloaded images
	I0718 21:10:32.629165    5402 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0718 21:10:32.629178    5402 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:10:32.629263    5402 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/config.json ...
	I0718 21:10:32.629871    5402 start.go:360] acquireMachinesLock for multinode-127000-m02: {Name:mk8a0ac4b11cd5d9eba5ac8b9ae33317742f9112 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:10:32.629936    5402 start.go:364] duration metric: took 48.568µs to acquireMachinesLock for "multinode-127000-m02"
	I0718 21:10:32.629954    5402 start.go:96] Skipping create...Using existing machine configuration
	I0718 21:10:32.629960    5402 fix.go:54] fixHost starting: m02
	I0718 21:10:32.630261    5402 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 21:10:32.630278    5402 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 21:10:32.639255    5402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53451
	I0718 21:10:32.639596    5402 main.go:141] libmachine: () Calling .GetVersion
	I0718 21:10:32.639981    5402 main.go:141] libmachine: Using API Version  1
	I0718 21:10:32.639998    5402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 21:10:32.640248    5402 main.go:141] libmachine: () Calling .GetMachineName
	I0718 21:10:32.640378    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .DriverName
	I0718 21:10:32.640498    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetState
	I0718 21:10:32.640605    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 21:10:32.640673    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | hyperkit pid from json: 5340
	I0718 21:10:32.641600    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | hyperkit pid 5340 missing from process table
	I0718 21:10:32.641631    5402 fix.go:112] recreateIfNeeded on multinode-127000-m02: state=Stopped err=<nil>
	I0718 21:10:32.641642    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .DriverName
	W0718 21:10:32.641719    5402 fix.go:138] unexpected machine state, will restart: <nil>
	I0718 21:10:32.663372    5402 out.go:177] * Restarting existing hyperkit VM for "multinode-127000-m02" ...
	I0718 21:10:32.704966    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .Start
	I0718 21:10:32.705130    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 21:10:32.705157    5402 main.go:141] libmachine: (multinode-127000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/hyperkit.pid
	I0718 21:10:32.706118    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | hyperkit pid 5340 missing from process table
	I0718 21:10:32.706129    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | pid 5340 is in state "Stopped"
	I0718 21:10:32.706139    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/hyperkit.pid...
	I0718 21:10:32.706374    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | Using UUID e9cb8dfe-c218-475a-adda-766363901a8e
	I0718 21:10:32.731699    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | Generated MAC 3a:d2:59:42:45:2c
	I0718 21:10:32.731721    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-127000
	I0718 21:10:32.731872    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:32 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e9cb8dfe-c218-475a-adda-766363901a8e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bec00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0718 21:10:32.731913    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:32 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e9cb8dfe-c218-475a-adda-766363901a8e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bec00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0718 21:10:32.731961    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:32 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e9cb8dfe-c218-475a-adda-766363901a8e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/multinode-127000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/tty,log=/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/bzimage,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-127000"}
	I0718 21:10:32.732011    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:32 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e9cb8dfe-c218-475a-adda-766363901a8e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/multinode-127000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/tty,log=/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/bzimage,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-127000"
	I0718 21:10:32.732023    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:32 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0718 21:10:32.733410    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:32 DEBUG: hyperkit: Pid is 5426
	I0718 21:10:32.733827    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | Attempt 0
	I0718 21:10:32.733846    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 21:10:32.733915    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | hyperkit pid from json: 5426
	I0718 21:10:32.735712    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | Searching for 3a:d2:59:42:45:2c in /var/db/dhcpd_leases ...
	I0718 21:10:32.735800    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0718 21:10:32.735814    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:d2:e2:11:67:74:1c ID:1,d2:e2:11:67:74:1c Lease:0x669b386d}
	I0718 21:10:32.735824    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:4c:de:4f:d8:27 ID:1,6:4c:de:4f:d8:27 Lease:0x6699e6d1}
	I0718 21:10:32.735831    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:d2:59:42:45:2c ID:1,3a:d2:59:42:45:2c Lease:0x669b37f6}
	I0718 21:10:32.735839    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | Found match: 3a:d2:59:42:45:2c
	I0718 21:10:32.735848    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | IP: 192.169.0.18
	I0718 21:10:32.735905    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetConfigRaw
	I0718 21:10:32.741609    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:32 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0718 21:10:32.758154    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetIP
	I0718 21:10:32.758555    5402 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/config.json ...
	I0718 21:10:32.759275    5402 machine.go:94] provisionDockerMachine start ...
	I0718 21:10:32.759310    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .DriverName
	I0718 21:10:32.759463    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHHostname
	I0718 21:10:32.759587    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHPort
	I0718 21:10:32.759722    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:10:32.759872    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:10:32.760011    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHUsername
	I0718 21:10:32.760185    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:10:32.760410    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0718 21:10:32.760421    5402 main.go:141] libmachine: About to run SSH command:
	hostname
	I0718 21:10:32.767227    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:32 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0718 21:10:32.768518    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0718 21:10:32.768559    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0718 21:10:32.768587    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0718 21:10:32.768604    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0718 21:10:33.149383    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0718 21:10:33.149398    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0718 21:10:33.264080    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0718 21:10:33.264101    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0718 21:10:33.264122    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0718 21:10:33.264133    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0718 21:10:33.264925    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0718 21:10:33.264936    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0718 21:10:38.544157    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:38 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0718 21:10:38.544248    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:38 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0718 21:10:38.544256    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:38 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0718 21:10:38.568627    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:38 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0718 21:11:07.824870    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0718 21:11:07.824885    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetMachineName
	I0718 21:11:07.825005    5402 buildroot.go:166] provisioning hostname "multinode-127000-m02"
	I0718 21:11:07.825016    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetMachineName
	I0718 21:11:07.825103    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHHostname
	I0718 21:11:07.825194    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHPort
	I0718 21:11:07.825288    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:07.825382    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:07.825463    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHUsername
	I0718 21:11:07.825643    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:11:07.825825    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0718 21:11:07.825834    5402 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-127000-m02 && echo "multinode-127000-m02" | sudo tee /etc/hostname
	I0718 21:11:07.888187    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-127000-m02
	
	I0718 21:11:07.888202    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHHostname
	I0718 21:11:07.888332    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHPort
	I0718 21:11:07.888430    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:07.888520    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:07.888631    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHUsername
	I0718 21:11:07.888752    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:11:07.888897    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0718 21:11:07.888909    5402 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-127000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-127000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-127000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 21:11:07.947668    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 21:11:07.947684    5402 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1411/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1411/.minikube}
	I0718 21:11:07.947698    5402 buildroot.go:174] setting up certificates
	I0718 21:11:07.947704    5402 provision.go:84] configureAuth start
	I0718 21:11:07.947712    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetMachineName
	I0718 21:11:07.947850    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetIP
	I0718 21:11:07.947947    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHHostname
	I0718 21:11:07.948028    5402 provision.go:143] copyHostCerts
	I0718 21:11:07.948055    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19302-1411/.minikube/cert.pem
	I0718 21:11:07.948119    5402 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1411/.minikube/cert.pem, removing ...
	I0718 21:11:07.948125    5402 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1411/.minikube/cert.pem
	I0718 21:11:07.948300    5402 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1411/.minikube/cert.pem (1123 bytes)
	I0718 21:11:07.948505    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19302-1411/.minikube/key.pem
	I0718 21:11:07.948551    5402 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1411/.minikube/key.pem, removing ...
	I0718 21:11:07.948556    5402 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1411/.minikube/key.pem
	I0718 21:11:07.948644    5402 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1411/.minikube/key.pem (1675 bytes)
	I0718 21:11:07.948785    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.pem
	I0718 21:11:07.948825    5402 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.pem, removing ...
	I0718 21:11:07.948830    5402 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.pem
	I0718 21:11:07.948909    5402 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.pem (1082 bytes)
	I0718 21:11:07.949049    5402 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca-key.pem org=jenkins.multinode-127000-m02 san=[127.0.0.1 192.169.0.18 localhost minikube multinode-127000-m02]
	I0718 21:11:08.143659    5402 provision.go:177] copyRemoteCerts
	I0718 21:11:08.143717    5402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 21:11:08.143744    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHHostname
	I0718 21:11:08.143882    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHPort
	I0718 21:11:08.143967    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:08.144061    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHUsername
	I0718 21:11:08.144155    5402 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/id_rsa Username:docker}
	I0718 21:11:08.176532    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0718 21:11:08.176606    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 21:11:08.195955    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0718 21:11:08.196020    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0718 21:11:08.215815    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0718 21:11:08.215883    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0718 21:11:08.235156    5402 provision.go:87] duration metric: took 287.432646ms to configureAuth
	I0718 21:11:08.235178    5402 buildroot.go:189] setting minikube options for container-runtime
	I0718 21:11:08.235377    5402 config.go:182] Loaded profile config "multinode-127000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:11:08.235407    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .DriverName
	I0718 21:11:08.235539    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHHostname
	I0718 21:11:08.235628    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHPort
	I0718 21:11:08.235718    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:08.235809    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:08.235899    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHUsername
	I0718 21:11:08.236024    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:11:08.236155    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0718 21:11:08.236163    5402 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 21:11:08.288129    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0718 21:11:08.288142    5402 buildroot.go:70] root file system type: tmpfs
	I0718 21:11:08.288216    5402 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 21:11:08.288227    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHHostname
	I0718 21:11:08.288361    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHPort
	I0718 21:11:08.288460    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:08.288564    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:08.288655    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHUsername
	I0718 21:11:08.288801    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:11:08.288943    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0718 21:11:08.288985    5402 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.17"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 21:11:08.350933    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.17
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 21:11:08.350948    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHHostname
	I0718 21:11:08.351082    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHPort
	I0718 21:11:08.351174    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:08.351259    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:08.351365    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHUsername
	I0718 21:11:08.351490    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:11:08.351632    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0718 21:11:08.351644    5402 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 21:11:09.927444    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0718 21:11:09.927468    5402 machine.go:97] duration metric: took 37.167077876s to provisionDockerMachine
	I0718 21:11:09.927475    5402 start.go:293] postStartSetup for "multinode-127000-m02" (driver="hyperkit")
	I0718 21:11:09.927485    5402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 21:11:09.927497    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .DriverName
	I0718 21:11:09.927694    5402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 21:11:09.927706    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHHostname
	I0718 21:11:09.927800    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHPort
	I0718 21:11:09.927889    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:09.927977    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHUsername
	I0718 21:11:09.928059    5402 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/id_rsa Username:docker}
	I0718 21:11:09.961907    5402 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 21:11:09.964865    5402 command_runner.go:130] > NAME=Buildroot
	I0718 21:11:09.964873    5402 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0718 21:11:09.964877    5402 command_runner.go:130] > ID=buildroot
	I0718 21:11:09.964881    5402 command_runner.go:130] > VERSION_ID=2023.02.9
	I0718 21:11:09.964885    5402 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0718 21:11:09.965015    5402 info.go:137] Remote host: Buildroot 2023.02.9
	I0718 21:11:09.965025    5402 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1411/.minikube/addons for local assets ...
	I0718 21:11:09.965128    5402 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1411/.minikube/files for local assets ...
	I0718 21:11:09.965315    5402 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1411/.minikube/files/etc/ssl/certs/19482.pem -> 19482.pem in /etc/ssl/certs
	I0718 21:11:09.965322    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/files/etc/ssl/certs/19482.pem -> /etc/ssl/certs/19482.pem
	I0718 21:11:09.965532    5402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 21:11:09.973514    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/files/etc/ssl/certs/19482.pem --> /etc/ssl/certs/19482.pem (1708 bytes)
	I0718 21:11:09.992309    5402 start.go:296] duration metric: took 64.820952ms for postStartSetup
	I0718 21:11:09.992329    5402 fix.go:56] duration metric: took 37.361260324s for fixHost
	I0718 21:11:09.992345    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHHostname
	I0718 21:11:09.992477    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHPort
	I0718 21:11:09.992556    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:09.992651    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:09.992749    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHUsername
	I0718 21:11:09.992862    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:11:09.993002    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0718 21:11:09.993012    5402 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0718 21:11:10.045137    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721362269.886236030
	
	I0718 21:11:10.045150    5402 fix.go:216] guest clock: 1721362269.886236030
	I0718 21:11:10.045155    5402 fix.go:229] Guest: 2024-07-18 21:11:09.88623603 -0700 PDT Remote: 2024-07-18 21:11:09.992335 -0700 PDT m=+122.291588156 (delta=-106.09897ms)
	I0718 21:11:10.045169    5402 fix.go:200] guest clock delta is within tolerance: -106.09897ms
	I0718 21:11:10.045174    5402 start.go:83] releasing machines lock for "multinode-127000-m02", held for 37.414119601s
	I0718 21:11:10.045191    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .DriverName
	I0718 21:11:10.045325    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetIP
	I0718 21:11:10.069576    5402 out.go:177] * Found network options:
	I0718 21:11:10.089556    5402 out.go:177]   - NO_PROXY=192.169.0.17
	W0718 21:11:10.110680    5402 proxy.go:119] fail to check proxy env: Error ip not in block
	I0718 21:11:10.110717    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .DriverName
	I0718 21:11:10.111529    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .DriverName
	I0718 21:11:10.111754    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .DriverName
	I0718 21:11:10.111841    5402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 21:11:10.111879    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHHostname
	W0718 21:11:10.111988    5402 proxy.go:119] fail to check proxy env: Error ip not in block
	I0718 21:11:10.112102    5402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0718 21:11:10.112121    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHHostname
	I0718 21:11:10.112172    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHPort
	I0718 21:11:10.112255    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHPort
	I0718 21:11:10.112309    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:10.112405    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:10.112477    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHUsername
	I0718 21:11:10.112582    5402 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/id_rsa Username:docker}
	I0718 21:11:10.112596    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHUsername
	I0718 21:11:10.112701    5402 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/id_rsa Username:docker}
	I0718 21:11:10.141918    5402 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0718 21:11:10.142038    5402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0718 21:11:10.142104    5402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 21:11:10.188167    5402 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0718 21:11:10.188996    5402 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0718 21:11:10.189024    5402 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 21:11:10.189039    5402 start.go:495] detecting cgroup driver to use...
	I0718 21:11:10.189152    5402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 21:11:10.204600    5402 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0718 21:11:10.204850    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0718 21:11:10.213829    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0718 21:11:10.222757    5402 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0718 21:11:10.222812    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0718 21:11:10.231817    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 21:11:10.240739    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0718 21:11:10.249684    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 21:11:10.258638    5402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 21:11:10.268013    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0718 21:11:10.276993    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0718 21:11:10.285910    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0718 21:11:10.295134    5402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 21:11:10.303190    5402 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0718 21:11:10.303340    5402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 21:11:10.311494    5402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:11:10.408459    5402 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0718 21:11:10.426650    5402 start.go:495] detecting cgroup driver to use...
	I0718 21:11:10.426721    5402 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 21:11:10.447527    5402 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0718 21:11:10.449131    5402 command_runner.go:130] > [Unit]
	I0718 21:11:10.449142    5402 command_runner.go:130] > Description=Docker Application Container Engine
	I0718 21:11:10.449154    5402 command_runner.go:130] > Documentation=https://docs.docker.com
	I0718 21:11:10.449162    5402 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0718 21:11:10.449169    5402 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0718 21:11:10.449173    5402 command_runner.go:130] > StartLimitBurst=3
	I0718 21:11:10.449177    5402 command_runner.go:130] > StartLimitIntervalSec=60
	I0718 21:11:10.449181    5402 command_runner.go:130] > [Service]
	I0718 21:11:10.449184    5402 command_runner.go:130] > Type=notify
	I0718 21:11:10.449188    5402 command_runner.go:130] > Restart=on-failure
	I0718 21:11:10.449191    5402 command_runner.go:130] > Environment=NO_PROXY=192.169.0.17
	I0718 21:11:10.449198    5402 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0718 21:11:10.449207    5402 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0718 21:11:10.449213    5402 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0718 21:11:10.449219    5402 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0718 21:11:10.449225    5402 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0718 21:11:10.449231    5402 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0718 21:11:10.449237    5402 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0718 21:11:10.449250    5402 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0718 21:11:10.449256    5402 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0718 21:11:10.449259    5402 command_runner.go:130] > ExecStart=
	I0718 21:11:10.449274    5402 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0718 21:11:10.449283    5402 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0718 21:11:10.449293    5402 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0718 21:11:10.449298    5402 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0718 21:11:10.449304    5402 command_runner.go:130] > LimitNOFILE=infinity
	I0718 21:11:10.449307    5402 command_runner.go:130] > LimitNPROC=infinity
	I0718 21:11:10.449311    5402 command_runner.go:130] > LimitCORE=infinity
	I0718 21:11:10.449315    5402 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0718 21:11:10.449321    5402 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0718 21:11:10.449325    5402 command_runner.go:130] > TasksMax=infinity
	I0718 21:11:10.449331    5402 command_runner.go:130] > TimeoutStartSec=0
	I0718 21:11:10.449337    5402 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0718 21:11:10.449340    5402 command_runner.go:130] > Delegate=yes
	I0718 21:11:10.449344    5402 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0718 21:11:10.449351    5402 command_runner.go:130] > KillMode=process
	I0718 21:11:10.449355    5402 command_runner.go:130] > [Install]
	I0718 21:11:10.449359    5402 command_runner.go:130] > WantedBy=multi-user.target
	I0718 21:11:10.449470    5402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 21:11:10.461265    5402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 21:11:10.482926    5402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 21:11:10.493342    5402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 21:11:10.509204    5402 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0718 21:11:10.530540    5402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 21:11:10.541206    5402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 21:11:10.556963    5402 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0718 21:11:10.557323    5402 ssh_runner.go:195] Run: which cri-dockerd
	I0718 21:11:10.560174    5402 command_runner.go:130] > /usr/bin/cri-dockerd
	I0718 21:11:10.560356    5402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0718 21:11:10.567648    5402 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0718 21:11:10.581315    5402 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 21:11:10.676062    5402 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 21:11:10.790035    5402 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0718 21:11:10.790064    5402 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0718 21:11:10.804358    5402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:11:10.894374    5402 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 21:12:11.742801    5402 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0718 21:12:11.742816    5402 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0718 21:12:11.742825    5402 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.846631847s)
	I0718 21:12:11.742882    5402 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0718 21:12:11.751825    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0718 21:12:11.751838    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:08.484482329Z" level=info msg="Starting up"
	I0718 21:12:11.751846    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:08.485167981Z" level=info msg="containerd not running, starting managed containerd"
	I0718 21:12:11.751859    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:08.485801332Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	I0718 21:12:11.751868    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.502013303Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0718 21:12:11.751878    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.516958766Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0718 21:12:11.751894    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517038396Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0718 21:12:11.751903    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517155084Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0718 21:12:11.751914    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517197264Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0718 21:12:11.751927    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517355966Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0718 21:12:11.751940    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517457389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0718 21:12:11.751965    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517585479Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0718 21:12:11.751975    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517625666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0718 21:12:11.751985    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517656936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0718 21:12:11.751995    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517688957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0718 21:12:11.752005    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517881945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0718 21:12:11.752015    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.518099672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0718 21:12:11.752029    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.519645604Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0718 21:12:11.752039    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.519696927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0718 21:12:11.752060    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.519828049Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0718 21:12:11.752071    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.519870517Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0718 21:12:11.752080    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.520040351Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0718 21:12:11.752088    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.520089814Z" level=info msg="metadata content store policy set" policy=shared
	I0718 21:12:11.752097    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522003436Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0718 21:12:11.752107    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522064725Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0718 21:12:11.752115    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522104055Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0718 21:12:11.752126    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522136906Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0718 21:12:11.752135    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522168404Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0718 21:12:11.752143    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522233548Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0718 21:12:11.752152    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522448512Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0718 21:12:11.752163    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522530201Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0718 21:12:11.752172    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522566421Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0718 21:12:11.752182    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522596662Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0718 21:12:11.752191    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522630885Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0718 21:12:11.752201    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522660955Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0718 21:12:11.752211    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522697431Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0718 21:12:11.752226    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522732084Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0718 21:12:11.752237    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522762824Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0718 21:12:11.752245    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522792209Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0718 21:12:11.752287    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522821157Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0718 21:12:11.752299    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522848962Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0718 21:12:11.752312    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522945935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752320    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522982209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752329    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523011791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752338    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523044426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752347    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523073991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752356    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523102957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752365    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523131966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752373    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523160366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752381    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523189181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752392    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523228786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752401    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523261112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752409    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523289795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752418    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523320625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752427    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523355398Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0718 21:12:11.752435    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523391561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752444    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523421174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752453    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523448613Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0718 21:12:11.752463    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523523187Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0718 21:12:11.752474    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523566449Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0718 21:12:11.752484    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523596740Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0718 21:12:11.752638    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523625735Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0718 21:12:11.752651    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523653333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752660    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523681797Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0718 21:12:11.752668    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523709460Z" level=info msg="NRI interface is disabled by configuration."
	I0718 21:12:11.752677    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523910253Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0718 21:12:11.752685    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523995611Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0718 21:12:11.752693    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.524058018Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0718 21:12:11.752704    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.524093051Z" level=info msg="containerd successfully booted in 0.022782s"
	I0718 21:12:11.752712    5402 command_runner.go:130] > Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.507162701Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0718 21:12:11.752719    5402 command_runner.go:130] > Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.519725545Z" level=info msg="Loading containers: start."
	I0718 21:12:11.752739    5402 command_runner.go:130] > Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.625326434Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0718 21:12:11.752751    5402 command_runner.go:130] > Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.687949447Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0718 21:12:11.752763    5402 command_runner.go:130] > Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.736564365Z" level=warning msg="error locating sandbox id aa61e63897fba10e81bbeedbce44590b2b7c0a112dd320b80ba533d1869ed2df: sandbox aa61e63897fba10e81bbeedbce44590b2b7c0a112dd320b80ba533d1869ed2df not found"
	I0718 21:12:11.752771    5402 command_runner.go:130] > Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.736780148Z" level=info msg="Loading containers: done."
	I0718 21:12:11.752780    5402 command_runner.go:130] > Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.744186186Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0718 21:12:11.752788    5402 command_runner.go:130] > Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.744371679Z" level=info msg="Daemon has completed initialization"
	I0718 21:12:11.752796    5402 command_runner.go:130] > Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.766998398Z" level=info msg="API listen on /var/run/docker.sock"
	I0718 21:12:11.752803    5402 command_runner.go:130] > Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.767075299Z" level=info msg="API listen on [::]:2376"
	I0718 21:12:11.752808    5402 command_runner.go:130] > Jul 19 04:11:09 multinode-127000-m02 systemd[1]: Started Docker Application Container Engine.
	I0718 21:12:11.752815    5402 command_runner.go:130] > Jul 19 04:11:10 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:10.768722806Z" level=info msg="Processing signal 'terminated'"
	I0718 21:12:11.752825    5402 command_runner.go:130] > Jul 19 04:11:10 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:10.769547405Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0718 21:12:11.752833    5402 command_runner.go:130] > Jul 19 04:11:10 multinode-127000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0718 21:12:11.752841    5402 command_runner.go:130] > Jul 19 04:11:10 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:10.770121951Z" level=info msg="Daemon shutdown complete"
	I0718 21:12:11.752879    5402 command_runner.go:130] > Jul 19 04:11:10 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:10.770184908Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0718 21:12:11.752888    5402 command_runner.go:130] > Jul 19 04:11:10 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:10.770198671Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0718 21:12:11.752896    5402 command_runner.go:130] > Jul 19 04:11:11 multinode-127000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0718 21:12:11.752902    5402 command_runner.go:130] > Jul 19 04:11:11 multinode-127000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0718 21:12:11.752908    5402 command_runner.go:130] > Jul 19 04:11:11 multinode-127000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0718 21:12:11.752914    5402 command_runner.go:130] > Jul 19 04:11:11 multinode-127000-m02 dockerd[847]: time="2024-07-19T04:11:11.807768811Z" level=info msg="Starting up"
	I0718 21:12:11.752923    5402 command_runner.go:130] > Jul 19 04:12:11 multinode-127000-m02 dockerd[847]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0718 21:12:11.752931    5402 command_runner.go:130] > Jul 19 04:12:11 multinode-127000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0718 21:12:11.752938    5402 command_runner.go:130] > Jul 19 04:12:11 multinode-127000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0718 21:12:11.752943    5402 command_runner.go:130] > Jul 19 04:12:11 multinode-127000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0718 21:12:11.777591    5402 out.go:177] 
	W0718 21:12:11.798090    5402 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 04:11:08 multinode-127000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:11:08 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:08.484482329Z" level=info msg="Starting up"
	Jul 19 04:11:08 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:08.485167981Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 04:11:08 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:08.485801332Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.502013303Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.516958766Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517038396Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517155084Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517197264Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517355966Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517457389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517585479Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517625666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517656936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517688957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517881945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.518099672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.519645604Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.519696927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.519828049Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.519870517Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.520040351Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.520089814Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522003436Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522064725Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522104055Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522136906Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522168404Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522233548Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522448512Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522530201Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522566421Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522596662Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522630885Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522660955Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522697431Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522732084Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522762824Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522792209Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522821157Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522848962Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522945935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522982209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523011791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523044426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523073991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523102957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523131966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523160366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523189181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523228786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523261112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523289795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523320625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523355398Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523391561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523421174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523448613Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523523187Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523566449Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523596740Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523625735Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523653333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523681797Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523709460Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523910253Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523995611Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.524058018Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.524093051Z" level=info msg="containerd successfully booted in 0.022782s"
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.507162701Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.519725545Z" level=info msg="Loading containers: start."
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.625326434Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.687949447Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.736564365Z" level=warning msg="error locating sandbox id aa61e63897fba10e81bbeedbce44590b2b7c0a112dd320b80ba533d1869ed2df: sandbox aa61e63897fba10e81bbeedbce44590b2b7c0a112dd320b80ba533d1869ed2df not found"
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.736780148Z" level=info msg="Loading containers: done."
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.744186186Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.744371679Z" level=info msg="Daemon has completed initialization"
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.766998398Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.767075299Z" level=info msg="API listen on [::]:2376"
	Jul 19 04:11:09 multinode-127000-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 19 04:11:10 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:10.768722806Z" level=info msg="Processing signal 'terminated'"
	Jul 19 04:11:10 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:10.769547405Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 04:11:10 multinode-127000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 04:11:10 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:10.770121951Z" level=info msg="Daemon shutdown complete"
	Jul 19 04:11:10 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:10.770184908Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 04:11:10 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:10.770198671Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 04:11:11 multinode-127000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 04:11:11 multinode-127000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:11:11 multinode-127000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:11:11 multinode-127000-m02 dockerd[847]: time="2024-07-19T04:11:11.807768811Z" level=info msg="Starting up"
	Jul 19 04:12:11 multinode-127000-m02 dockerd[847]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:12:11 multinode-127000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:12:11 multinode-127000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:12:11 multinode-127000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
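The journal above shows the actual failure: after a restart, `dockerd` (pid 847) times out dialing containerd's socket (`failed to dial "/run/containerd/containerd.sock": context deadline exceeded`), so systemd marks `docker.service` failed. As a minimal sketch for triaging saved journals like this one (not part of minikube or the test suite; the function name is hypothetical), a grep-based classifier could distinguish this signature from a generic start failure:

```shell
#!/bin/sh
# Hypothetical triage helper: classify a saved "journalctl -u docker" dump
# by the failure signatures that appear in this report.
triage_docker_journal() {
  journal="$1"
  # dockerd could not reach containerd's socket before its dial deadline
  if grep -q 'failed to dial "/run/containerd/containerd.sock"' "$journal"; then
    echo "containerd-socket-timeout"
  # docker.service failed for some other reason
  elif grep -q 'Failed to start Docker Application Container Engine' "$journal"; then
    echo "docker-start-failed"
  else
    echo "unknown"
  fi
}

# Example usage:
#   triage_docker_journal /tmp/docker-journal.txt
```

On the journal captured above, this would report `containerd-socket-timeout`, pointing investigation at containerd rather than at dockerd's own configuration.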
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523653333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523681797Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523709460Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523910253Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523995611Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.524058018Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.524093051Z" level=info msg="containerd successfully booted in 0.022782s"
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.507162701Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.519725545Z" level=info msg="Loading containers: start."
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.625326434Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.687949447Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.736564365Z" level=warning msg="error locating sandbox id aa61e63897fba10e81bbeedbce44590b2b7c0a112dd320b80ba533d1869ed2df: sandbox aa61e63897fba10e81bbeedbce44590b2b7c0a112dd320b80ba533d1869ed2df not found"
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.736780148Z" level=info msg="Loading containers: done."
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.744186186Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.744371679Z" level=info msg="Daemon has completed initialization"
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.766998398Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.767075299Z" level=info msg="API listen on [::]:2376"
	Jul 19 04:11:09 multinode-127000-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 19 04:11:10 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:10.768722806Z" level=info msg="Processing signal 'terminated'"
	Jul 19 04:11:10 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:10.769547405Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 04:11:10 multinode-127000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 04:11:10 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:10.770121951Z" level=info msg="Daemon shutdown complete"
	Jul 19 04:11:10 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:10.770184908Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 04:11:10 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:10.770198671Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 04:11:11 multinode-127000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 04:11:11 multinode-127000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:11:11 multinode-127000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:11:11 multinode-127000-m02 dockerd[847]: time="2024-07-19T04:11:11.807768811Z" level=info msg="Starting up"
	Jul 19 04:12:11 multinode-127000-m02 dockerd[847]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:12:11 multinode-127000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:12:11 multinode-127000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:12:11 multinode-127000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0718 21:12:11.798193    5402 out.go:239] * 
	W0718 21:12:11.799426    5402 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:12:11.861206    5402 out.go:177] 

                                                
                                                
** /stderr **
multinode_test.go:378: failed to start cluster. args "out/minikube-darwin-amd64 start -p multinode-127000 --wait=true -v=8 --alsologtostderr --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p multinode-127000 -n multinode-127000
helpers_test.go:244: <<< TestMultiNode/serial/RestartMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p multinode-127000 logs -n 25: (2.734329021s)
helpers_test.go:252: TestMultiNode/serial/RestartMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                                            Args                                                             |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| cp      | multinode-127000 cp multinode-127000-m02:/home/docker/cp-test.txt                                                           | multinode-127000 | jenkins | v1.33.1 | 18 Jul 24 21:05 PDT | 18 Jul 24 21:05 PDT |
	|         | multinode-127000:/home/docker/cp-test_multinode-127000-m02_multinode-127000.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-127000 ssh -n                                                                                                     | multinode-127000 | jenkins | v1.33.1 | 18 Jul 24 21:05 PDT | 18 Jul 24 21:05 PDT |
	|         | multinode-127000-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-127000 ssh -n multinode-127000 sudo cat                                                                           | multinode-127000 | jenkins | v1.33.1 | 18 Jul 24 21:05 PDT | 18 Jul 24 21:05 PDT |
	|         | /home/docker/cp-test_multinode-127000-m02_multinode-127000.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-127000 cp multinode-127000-m02:/home/docker/cp-test.txt                                                           | multinode-127000 | jenkins | v1.33.1 | 18 Jul 24 21:05 PDT | 18 Jul 24 21:05 PDT |
	|         | multinode-127000-m03:/home/docker/cp-test_multinode-127000-m02_multinode-127000-m03.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-127000 ssh -n                                                                                                     | multinode-127000 | jenkins | v1.33.1 | 18 Jul 24 21:05 PDT | 18 Jul 24 21:05 PDT |
	|         | multinode-127000-m02 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-127000 ssh -n multinode-127000-m03 sudo cat                                                                       | multinode-127000 | jenkins | v1.33.1 | 18 Jul 24 21:05 PDT | 18 Jul 24 21:05 PDT |
	|         | /home/docker/cp-test_multinode-127000-m02_multinode-127000-m03.txt                                                          |                  |         |         |                     |                     |
	| cp      | multinode-127000 cp testdata/cp-test.txt                                                                                    | multinode-127000 | jenkins | v1.33.1 | 18 Jul 24 21:05 PDT | 18 Jul 24 21:05 PDT |
	|         | multinode-127000-m03:/home/docker/cp-test.txt                                                                               |                  |         |         |                     |                     |
	| ssh     | multinode-127000 ssh -n                                                                                                     | multinode-127000 | jenkins | v1.33.1 | 18 Jul 24 21:05 PDT | 18 Jul 24 21:05 PDT |
	|         | multinode-127000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-127000 cp multinode-127000-m03:/home/docker/cp-test.txt                                                           | multinode-127000 | jenkins | v1.33.1 | 18 Jul 24 21:05 PDT | 18 Jul 24 21:05 PDT |
	|         | /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile4019457516/001/cp-test_multinode-127000-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-127000 ssh -n                                                                                                     | multinode-127000 | jenkins | v1.33.1 | 18 Jul 24 21:05 PDT | 18 Jul 24 21:05 PDT |
	|         | multinode-127000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| cp      | multinode-127000 cp multinode-127000-m03:/home/docker/cp-test.txt                                                           | multinode-127000 | jenkins | v1.33.1 | 18 Jul 24 21:05 PDT | 18 Jul 24 21:05 PDT |
	|         | multinode-127000:/home/docker/cp-test_multinode-127000-m03_multinode-127000.txt                                             |                  |         |         |                     |                     |
	| ssh     | multinode-127000 ssh -n                                                                                                     | multinode-127000 | jenkins | v1.33.1 | 18 Jul 24 21:05 PDT | 18 Jul 24 21:05 PDT |
	|         | multinode-127000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-127000 ssh -n multinode-127000 sudo cat                                                                           | multinode-127000 | jenkins | v1.33.1 | 18 Jul 24 21:05 PDT | 18 Jul 24 21:05 PDT |
	|         | /home/docker/cp-test_multinode-127000-m03_multinode-127000.txt                                                              |                  |         |         |                     |                     |
	| cp      | multinode-127000 cp multinode-127000-m03:/home/docker/cp-test.txt                                                           | multinode-127000 | jenkins | v1.33.1 | 18 Jul 24 21:05 PDT | 18 Jul 24 21:05 PDT |
	|         | multinode-127000-m02:/home/docker/cp-test_multinode-127000-m03_multinode-127000-m02.txt                                     |                  |         |         |                     |                     |
	| ssh     | multinode-127000 ssh -n                                                                                                     | multinode-127000 | jenkins | v1.33.1 | 18 Jul 24 21:05 PDT | 18 Jul 24 21:05 PDT |
	|         | multinode-127000-m03 sudo cat                                                                                               |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                    |                  |         |         |                     |                     |
	| ssh     | multinode-127000 ssh -n multinode-127000-m02 sudo cat                                                                       | multinode-127000 | jenkins | v1.33.1 | 18 Jul 24 21:05 PDT | 18 Jul 24 21:05 PDT |
	|         | /home/docker/cp-test_multinode-127000-m03_multinode-127000-m02.txt                                                          |                  |         |         |                     |                     |
	| node    | multinode-127000 node stop m03                                                                                              | multinode-127000 | jenkins | v1.33.1 | 18 Jul 24 21:05 PDT | 18 Jul 24 21:05 PDT |
	| node    | multinode-127000 node start                                                                                                 | multinode-127000 | jenkins | v1.33.1 | 18 Jul 24 21:05 PDT | 18 Jul 24 21:05 PDT |
	|         | m03 -v=7 --alsologtostderr                                                                                                  |                  |         |         |                     |                     |
	| node    | list -p multinode-127000                                                                                                    | multinode-127000 | jenkins | v1.33.1 | 18 Jul 24 21:05 PDT |                     |
	| stop    | -p multinode-127000                                                                                                         | multinode-127000 | jenkins | v1.33.1 | 18 Jul 24 21:05 PDT | 18 Jul 24 21:06 PDT |
	| start   | -p multinode-127000                                                                                                         | multinode-127000 | jenkins | v1.33.1 | 18 Jul 24 21:06 PDT | 18 Jul 24 21:08 PDT |
	|         | --wait=true -v=8                                                                                                            |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                           |                  |         |         |                     |                     |
	| node    | list -p multinode-127000                                                                                                    | multinode-127000 | jenkins | v1.33.1 | 18 Jul 24 21:08 PDT |                     |
	| node    | multinode-127000 node delete                                                                                                | multinode-127000 | jenkins | v1.33.1 | 18 Jul 24 21:08 PDT | 18 Jul 24 21:08 PDT |
	|         | m03                                                                                                                         |                  |         |         |                     |                     |
	| stop    | multinode-127000 stop                                                                                                       | multinode-127000 | jenkins | v1.33.1 | 18 Jul 24 21:08 PDT | 18 Jul 24 21:09 PDT |
	| start   | -p multinode-127000                                                                                                         | multinode-127000 | jenkins | v1.33.1 | 18 Jul 24 21:09 PDT |                     |
	|         | --wait=true -v=8                                                                                                            |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                           |                  |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                           |                  |         |         |                     |                     |
	|---------|-----------------------------------------------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/18 21:09:07
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0718 21:09:07.732715    5402 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:09:07.732921    5402 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:07.732927    5402 out.go:304] Setting ErrFile to fd 2...
	I0718 21:09:07.732930    5402 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:07.733092    5402 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1411/.minikube/bin
	I0718 21:09:07.734586    5402 out.go:298] Setting JSON to false
	I0718 21:09:07.756885    5402 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":4121,"bootTime":1721358026,"procs":442,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0718 21:09:07.756981    5402 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:09:07.779533    5402 out.go:177] * [multinode-127000] minikube v1.33.1 on Darwin 14.5
	I0718 21:09:07.821838    5402 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:09:07.821863    5402 notify.go:220] Checking for updates...
	I0718 21:09:07.864576    5402 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1411/kubeconfig
	I0718 21:09:07.885883    5402 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 21:09:07.908876    5402 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:09:07.929805    5402 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1411/.minikube
	I0718 21:09:07.951053    5402 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:09:07.972764    5402 config.go:182] Loaded profile config "multinode-127000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:09:07.973450    5402 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 21:09:07.973523    5402 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 21:09:07.983201    5402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53419
	I0718 21:09:07.983736    5402 main.go:141] libmachine: () Calling .GetVersion
	I0718 21:09:07.984263    5402 main.go:141] libmachine: Using API Version  1
	I0718 21:09:07.984272    5402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 21:09:07.984589    5402 main.go:141] libmachine: () Calling .GetMachineName
	I0718 21:09:07.984776    5402 main.go:141] libmachine: (multinode-127000) Calling .DriverName
	I0718 21:09:07.984989    5402 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:09:07.985254    5402 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 21:09:07.985277    5402 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 21:09:07.993975    5402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53421
	I0718 21:09:07.994365    5402 main.go:141] libmachine: () Calling .GetVersion
	I0718 21:09:07.994783    5402 main.go:141] libmachine: Using API Version  1
	I0718 21:09:07.994824    5402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 21:09:07.995030    5402 main.go:141] libmachine: () Calling .GetMachineName
	I0718 21:09:07.995222    5402 main.go:141] libmachine: (multinode-127000) Calling .DriverName
	I0718 21:09:08.023599    5402 out.go:177] * Using the hyperkit driver based on existing profile
	I0718 21:09:08.065849    5402 start.go:297] selected driver: hyperkit
	I0718 21:09:08.065899    5402 start.go:901] validating driver "hyperkit" against &{Name:multinode-127000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-127000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:09:08.066121    5402 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:09:08.066321    5402 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:09:08.066519    5402 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19302-1411/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0718 21:09:08.075964    5402 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0718 21:09:08.080379    5402 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 21:09:08.080402    5402 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0718 21:09:08.083236    5402 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:09:08.083295    5402 cni.go:84] Creating CNI manager for ""
	I0718 21:09:08.083305    5402 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0718 21:09:08.083377    5402 start.go:340] cluster config:
	{Name:multinode-127000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:multinode-127000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:09:08.083485    5402 iso.go:125] acquiring lock: {Name:mka3a56e9fb30ac1fad44235cb5c998fd919cd8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:09:08.127919    5402 out.go:177] * Starting "multinode-127000" primary control-plane node in "multinode-127000" cluster
	I0718 21:09:08.149869    5402 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:09:08.149938    5402 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0718 21:09:08.149965    5402 cache.go:56] Caching tarball of preloaded images
	I0718 21:09:08.150175    5402 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0718 21:09:08.150197    5402 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:09:08.150374    5402 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/config.json ...
	I0718 21:09:08.151214    5402 start.go:360] acquireMachinesLock for multinode-127000: {Name:mk8a0ac4b11cd5d9eba5ac8b9ae33317742f9112 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:09:08.151330    5402 start.go:364] duration metric: took 93.269µs to acquireMachinesLock for "multinode-127000"
	I0718 21:09:08.151385    5402 start.go:96] Skipping create...Using existing machine configuration
	I0718 21:09:08.151406    5402 fix.go:54] fixHost starting: 
	I0718 21:09:08.151801    5402 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 21:09:08.151863    5402 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 21:09:08.161189    5402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53423
	I0718 21:09:08.161603    5402 main.go:141] libmachine: () Calling .GetVersion
	I0718 21:09:08.161963    5402 main.go:141] libmachine: Using API Version  1
	I0718 21:09:08.161979    5402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 21:09:08.162222    5402 main.go:141] libmachine: () Calling .GetMachineName
	I0718 21:09:08.162354    5402 main.go:141] libmachine: (multinode-127000) Calling .DriverName
	I0718 21:09:08.162457    5402 main.go:141] libmachine: (multinode-127000) Calling .GetState
	I0718 21:09:08.162545    5402 main.go:141] libmachine: (multinode-127000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 21:09:08.162617    5402 main.go:141] libmachine: (multinode-127000) DBG | hyperkit pid from json: 5329
	I0718 21:09:08.163559    5402 main.go:141] libmachine: (multinode-127000) DBG | hyperkit pid 5329 missing from process table
	I0718 21:09:08.163604    5402 fix.go:112] recreateIfNeeded on multinode-127000: state=Stopped err=<nil>
	I0718 21:09:08.163621    5402 main.go:141] libmachine: (multinode-127000) Calling .DriverName
	W0718 21:09:08.163709    5402 fix.go:138] unexpected machine state, will restart: <nil>
	I0718 21:09:08.205879    5402 out.go:177] * Restarting existing hyperkit VM for "multinode-127000" ...
	I0718 21:09:08.228986    5402 main.go:141] libmachine: (multinode-127000) Calling .Start
	I0718 21:09:08.229275    5402 main.go:141] libmachine: (multinode-127000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 21:09:08.229342    5402 main.go:141] libmachine: (multinode-127000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/hyperkit.pid
	I0718 21:09:08.231130    5402 main.go:141] libmachine: (multinode-127000) DBG | hyperkit pid 5329 missing from process table
	I0718 21:09:08.231149    5402 main.go:141] libmachine: (multinode-127000) DBG | pid 5329 is in state "Stopped"
	I0718 21:09:08.231169    5402 main.go:141] libmachine: (multinode-127000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/hyperkit.pid...
	I0718 21:09:08.231357    5402 main.go:141] libmachine: (multinode-127000) DBG | Using UUID 2170d403-7108-4d79-a7e1-5094631d4682
	I0718 21:09:08.344896    5402 main.go:141] libmachine: (multinode-127000) DBG | Generated MAC d2:e2:11:67:74:1c
	I0718 21:09:08.344923    5402 main.go:141] libmachine: (multinode-127000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-127000
	I0718 21:09:08.345052    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2170d403-7108-4d79-a7e1-5094631d4682", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bcc60)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0718 21:09:08.345084    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"2170d403-7108-4d79-a7e1-5094631d4682", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bcc60)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0718 21:09:08.345131    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "2170d403-7108-4d79-a7e1-5094631d4682", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/multinode-127000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/tty,log=/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/bzimage,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-127000"}
	I0718 21:09:08.345176    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 2170d403-7108-4d79-a7e1-5094631d4682 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/multinode-127000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/tty,log=/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/console-ring -f kexec,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/bzimage,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-127000"
	I0718 21:09:08.345194    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0718 21:09:08.346583    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 DEBUG: hyperkit: Pid is 5415
	I0718 21:09:08.346921    5402 main.go:141] libmachine: (multinode-127000) DBG | Attempt 0
	I0718 21:09:08.346933    5402 main.go:141] libmachine: (multinode-127000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 21:09:08.346983    5402 main.go:141] libmachine: (multinode-127000) DBG | hyperkit pid from json: 5415
	I0718 21:09:08.348667    5402 main.go:141] libmachine: (multinode-127000) DBG | Searching for d2:e2:11:67:74:1c in /var/db/dhcpd_leases ...
	I0718 21:09:08.348752    5402 main.go:141] libmachine: (multinode-127000) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0718 21:09:08.348779    5402 main.go:141] libmachine: (multinode-127000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:4c:de:4f:d8:27 ID:1,6:4c:de:4f:d8:27 Lease:0x6699e6d1}
	I0718 21:09:08.348790    5402 main.go:141] libmachine: (multinode-127000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:d2:59:42:45:2c ID:1,3a:d2:59:42:45:2c Lease:0x669b37f6}
	I0718 21:09:08.348820    5402 main.go:141] libmachine: (multinode-127000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:d2:e2:11:67:74:1c ID:1,d2:e2:11:67:74:1c Lease:0x669b37bb}
	I0718 21:09:08.348836    5402 main.go:141] libmachine: (multinode-127000) DBG | Found match: d2:e2:11:67:74:1c
	I0718 21:09:08.348849    5402 main.go:141] libmachine: (multinode-127000) DBG | IP: 192.169.0.17
	I0718 21:09:08.348880    5402 main.go:141] libmachine: (multinode-127000) Calling .GetConfigRaw
	I0718 21:09:08.349504    5402 main.go:141] libmachine: (multinode-127000) Calling .GetIP
	I0718 21:09:08.349706    5402 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/config.json ...
	I0718 21:09:08.350127    5402 machine.go:94] provisionDockerMachine start ...
	I0718 21:09:08.350136    5402 main.go:141] libmachine: (multinode-127000) Calling .DriverName
	I0718 21:09:08.350259    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHHostname
	I0718 21:09:08.350365    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHPort
	I0718 21:09:08.350483    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:08.350628    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:08.350765    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHUsername
	I0718 21:09:08.350926    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:09:08.351138    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0718 21:09:08.351147    5402 main.go:141] libmachine: About to run SSH command:
	hostname
	I0718 21:09:08.355168    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0718 21:09:08.408798    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0718 21:09:08.409496    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0718 21:09:08.409516    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0718 21:09:08.409528    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0718 21:09:08.409535    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0718 21:09:08.789011    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0718 21:09:08.789027    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0718 21:09:08.903629    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0718 21:09:08.903646    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0718 21:09:08.903672    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0718 21:09:08.903690    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0718 21:09:08.904533    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0718 21:09:08.904545    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:08 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0718 21:09:14.174877    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:14 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0718 21:09:14.175009    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:14 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0718 21:09:14.175020    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:14 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0718 21:09:14.198773    5402 main.go:141] libmachine: (multinode-127000) DBG | 2024/07/18 21:09:14 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0718 21:09:43.418102    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0718 21:09:43.418117    5402 main.go:141] libmachine: (multinode-127000) Calling .GetMachineName
	I0718 21:09:43.418258    5402 buildroot.go:166] provisioning hostname "multinode-127000"
	I0718 21:09:43.418270    5402 main.go:141] libmachine: (multinode-127000) Calling .GetMachineName
	I0718 21:09:43.418369    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHHostname
	I0718 21:09:43.418470    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHPort
	I0718 21:09:43.418556    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:43.418655    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:43.418767    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHUsername
	I0718 21:09:43.418894    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:09:43.419109    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0718 21:09:43.419118    5402 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-127000 && echo "multinode-127000" | sudo tee /etc/hostname
	I0718 21:09:43.482187    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-127000
	
	I0718 21:09:43.482205    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHHostname
	I0718 21:09:43.482350    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHPort
	I0718 21:09:43.482456    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:43.482541    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:43.482641    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHUsername
	I0718 21:09:43.482771    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:09:43.482924    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0718 21:09:43.482937    5402 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-127000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-127000/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-127000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 21:09:43.540730    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 21:09:43.540752    5402 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1411/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1411/.minikube}
	I0718 21:09:43.540774    5402 buildroot.go:174] setting up certificates
	I0718 21:09:43.540782    5402 provision.go:84] configureAuth start
	I0718 21:09:43.540790    5402 main.go:141] libmachine: (multinode-127000) Calling .GetMachineName
	I0718 21:09:43.540933    5402 main.go:141] libmachine: (multinode-127000) Calling .GetIP
	I0718 21:09:43.541030    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHHostname
	I0718 21:09:43.541125    5402 provision.go:143] copyHostCerts
	I0718 21:09:43.541157    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.pem
	I0718 21:09:43.541228    5402 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.pem, removing ...
	I0718 21:09:43.541237    5402 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.pem
	I0718 21:09:43.541397    5402 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.pem (1082 bytes)
	I0718 21:09:43.541623    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19302-1411/.minikube/cert.pem
	I0718 21:09:43.541669    5402 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1411/.minikube/cert.pem, removing ...
	I0718 21:09:43.541674    5402 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1411/.minikube/cert.pem
	I0718 21:09:43.541839    5402 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1411/.minikube/cert.pem (1123 bytes)
	I0718 21:09:43.542026    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19302-1411/.minikube/key.pem
	I0718 21:09:43.542071    5402 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1411/.minikube/key.pem, removing ...
	I0718 21:09:43.542077    5402 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1411/.minikube/key.pem
	I0718 21:09:43.542168    5402 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1411/.minikube/key.pem (1675 bytes)
	I0718 21:09:43.542315    5402 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca-key.pem org=jenkins.multinode-127000 san=[127.0.0.1 192.169.0.17 localhost minikube multinode-127000]
	I0718 21:09:43.620132    5402 provision.go:177] copyRemoteCerts
	I0718 21:09:43.620181    5402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 21:09:43.620198    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHHostname
	I0718 21:09:43.620323    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHPort
	I0718 21:09:43.620415    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:43.620506    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHUsername
	I0718 21:09:43.620604    5402 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/id_rsa Username:docker}
	I0718 21:09:43.654508    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0718 21:09:43.654583    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 21:09:43.674626    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0718 21:09:43.674687    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0718 21:09:43.694129    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0718 21:09:43.694187    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0718 21:09:43.713871    5402 provision.go:87] duration metric: took 173.061203ms to configureAuth
	I0718 21:09:43.713883    5402 buildroot.go:189] setting minikube options for container-runtime
	I0718 21:09:43.714045    5402 config.go:182] Loaded profile config "multinode-127000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:09:43.714059    5402 main.go:141] libmachine: (multinode-127000) Calling .DriverName
	I0718 21:09:43.714199    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHHostname
	I0718 21:09:43.714288    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHPort
	I0718 21:09:43.714371    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:43.714466    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:43.714549    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHUsername
	I0718 21:09:43.714663    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:09:43.714814    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0718 21:09:43.714822    5402 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 21:09:43.767743    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0718 21:09:43.767754    5402 buildroot.go:70] root file system type: tmpfs
	I0718 21:09:43.767831    5402 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 21:09:43.767846    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHHostname
	I0718 21:09:43.767976    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHPort
	I0718 21:09:43.768060    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:43.768152    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:43.768246    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHUsername
	I0718 21:09:43.768391    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:09:43.768536    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0718 21:09:43.768581    5402 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 21:09:43.833695    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 21:09:43.833722    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHHostname
	I0718 21:09:43.833864    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHPort
	I0718 21:09:43.833950    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:43.834031    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:43.834130    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHUsername
	I0718 21:09:43.834249    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:09:43.834390    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0718 21:09:43.834403    5402 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 21:09:45.516084    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0718 21:09:45.516103    5402 machine.go:97] duration metric: took 37.164865517s to provisionDockerMachine
	I0718 21:09:45.516116    5402 start.go:293] postStartSetup for "multinode-127000" (driver="hyperkit")
	I0718 21:09:45.516123    5402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 21:09:45.516135    5402 main.go:141] libmachine: (multinode-127000) Calling .DriverName
	I0718 21:09:45.516307    5402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 21:09:45.516321    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHHostname
	I0718 21:09:45.516419    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHPort
	I0718 21:09:45.516513    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:45.516597    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHUsername
	I0718 21:09:45.516676    5402 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/id_rsa Username:docker}
	I0718 21:09:45.550620    5402 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 21:09:45.553761    5402 command_runner.go:130] > NAME=Buildroot
	I0718 21:09:45.553770    5402 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0718 21:09:45.553774    5402 command_runner.go:130] > ID=buildroot
	I0718 21:09:45.553777    5402 command_runner.go:130] > VERSION_ID=2023.02.9
	I0718 21:09:45.553781    5402 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0718 21:09:45.553876    5402 info.go:137] Remote host: Buildroot 2023.02.9
	I0718 21:09:45.553886    5402 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1411/.minikube/addons for local assets ...
	I0718 21:09:45.553983    5402 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1411/.minikube/files for local assets ...
	I0718 21:09:45.554167    5402 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1411/.minikube/files/etc/ssl/certs/19482.pem -> 19482.pem in /etc/ssl/certs
	I0718 21:09:45.554173    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/files/etc/ssl/certs/19482.pem -> /etc/ssl/certs/19482.pem
	I0718 21:09:45.554390    5402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 21:09:45.561542    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/files/etc/ssl/certs/19482.pem --> /etc/ssl/certs/19482.pem (1708 bytes)
	I0718 21:09:45.581669    5402 start.go:296] duration metric: took 65.543633ms for postStartSetup
	I0718 21:09:45.581690    5402 fix.go:56] duration metric: took 37.429182854s for fixHost
	I0718 21:09:45.581703    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHHostname
	I0718 21:09:45.581843    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHPort
	I0718 21:09:45.581945    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:45.582055    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:45.582146    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHUsername
	I0718 21:09:45.582254    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:09:45.582386    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.17 22 <nil> <nil>}
	I0718 21:09:45.582393    5402 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0718 21:09:45.632928    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721362185.710404724
	
	I0718 21:09:45.632949    5402 fix.go:216] guest clock: 1721362185.710404724
	I0718 21:09:45.632955    5402 fix.go:229] Guest: 2024-07-18 21:09:45.710404724 -0700 PDT Remote: 2024-07-18 21:09:45.581693 -0700 PDT m=+37.883450204 (delta=128.711724ms)
	I0718 21:09:45.632975    5402 fix.go:200] guest clock delta is within tolerance: 128.711724ms
	I0718 21:09:45.632978    5402 start.go:83] releasing machines lock for "multinode-127000", held for 37.480526357s
	I0718 21:09:45.632998    5402 main.go:141] libmachine: (multinode-127000) Calling .DriverName
	I0718 21:09:45.633133    5402 main.go:141] libmachine: (multinode-127000) Calling .GetIP
	I0718 21:09:45.633238    5402 main.go:141] libmachine: (multinode-127000) Calling .DriverName
	I0718 21:09:45.633575    5402 main.go:141] libmachine: (multinode-127000) Calling .DriverName
	I0718 21:09:45.633677    5402 main.go:141] libmachine: (multinode-127000) Calling .DriverName
	I0718 21:09:45.633764    5402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 21:09:45.633803    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHHostname
	I0718 21:09:45.633838    5402 ssh_runner.go:195] Run: cat /version.json
	I0718 21:09:45.633849    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHHostname
	I0718 21:09:45.633914    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHPort
	I0718 21:09:45.633939    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHPort
	I0718 21:09:45.634011    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:45.634039    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:09:45.634109    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHUsername
	I0718 21:09:45.634134    5402 main.go:141] libmachine: (multinode-127000) Calling .GetSSHUsername
	I0718 21:09:45.634203    5402 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/id_rsa Username:docker}
	I0718 21:09:45.634230    5402 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/id_rsa Username:docker}
	I0718 21:09:45.709482    5402 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0718 21:09:45.710315    5402 command_runner.go:130] > {"iso_version": "v1.33.1-1721324531-19298", "kicbase_version": "v0.0.44-1721234491-19282", "minikube_version": "v1.33.1", "commit": "0e13329c5f674facda20b63833c6d01811d249dd"}
	I0718 21:09:45.710510    5402 ssh_runner.go:195] Run: systemctl --version
	I0718 21:09:45.715584    5402 command_runner.go:130] > systemd 252 (252)
	I0718 21:09:45.715600    5402 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0718 21:09:45.715795    5402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0718 21:09:45.720069    5402 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0718 21:09:45.720100    5402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0718 21:09:45.720136    5402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 21:09:45.732422    5402 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0718 21:09:45.732459    5402 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 21:09:45.732466    5402 start.go:495] detecting cgroup driver to use...
	I0718 21:09:45.732558    5402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 21:09:45.747185    5402 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0718 21:09:45.747458    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0718 21:09:45.756153    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0718 21:09:45.764901    5402 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0718 21:09:45.764942    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0718 21:09:45.773526    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 21:09:45.782247    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0718 21:09:45.790817    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 21:09:45.799374    5402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 21:09:45.808402    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0718 21:09:45.817051    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0718 21:09:45.825770    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0718 21:09:45.834349    5402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 21:09:45.842207    5402 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0718 21:09:45.842377    5402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 21:09:45.850300    5402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:09:45.950029    5402 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0718 21:09:45.969523    5402 start.go:495] detecting cgroup driver to use...
	I0718 21:09:45.969602    5402 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 21:09:45.987662    5402 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0718 21:09:45.988224    5402 command_runner.go:130] > [Unit]
	I0718 21:09:45.988244    5402 command_runner.go:130] > Description=Docker Application Container Engine
	I0718 21:09:45.988264    5402 command_runner.go:130] > Documentation=https://docs.docker.com
	I0718 21:09:45.988275    5402 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0718 21:09:45.988280    5402 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0718 21:09:45.988284    5402 command_runner.go:130] > StartLimitBurst=3
	I0718 21:09:45.988288    5402 command_runner.go:130] > StartLimitIntervalSec=60
	I0718 21:09:45.988291    5402 command_runner.go:130] > [Service]
	I0718 21:09:45.988295    5402 command_runner.go:130] > Type=notify
	I0718 21:09:45.988298    5402 command_runner.go:130] > Restart=on-failure
	I0718 21:09:45.988305    5402 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0718 21:09:45.988319    5402 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0718 21:09:45.988326    5402 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0718 21:09:45.988331    5402 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0718 21:09:45.988337    5402 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0718 21:09:45.988345    5402 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0718 21:09:45.988352    5402 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0718 21:09:45.988360    5402 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0718 21:09:45.988366    5402 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0718 21:09:45.988372    5402 command_runner.go:130] > ExecStart=
	I0718 21:09:45.988386    5402 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0718 21:09:45.988390    5402 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0718 21:09:45.988397    5402 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0718 21:09:45.988403    5402 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0718 21:09:45.988406    5402 command_runner.go:130] > LimitNOFILE=infinity
	I0718 21:09:45.988411    5402 command_runner.go:130] > LimitNPROC=infinity
	I0718 21:09:45.988425    5402 command_runner.go:130] > LimitCORE=infinity
	I0718 21:09:45.988433    5402 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0718 21:09:45.988437    5402 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0718 21:09:45.988441    5402 command_runner.go:130] > TasksMax=infinity
	I0718 21:09:45.988445    5402 command_runner.go:130] > TimeoutStartSec=0
	I0718 21:09:45.988450    5402 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0718 21:09:45.988454    5402 command_runner.go:130] > Delegate=yes
	I0718 21:09:45.988459    5402 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0718 21:09:45.988464    5402 command_runner.go:130] > KillMode=process
	I0718 21:09:45.988467    5402 command_runner.go:130] > [Install]
	I0718 21:09:45.988481    5402 command_runner.go:130] > WantedBy=multi-user.target
	I0718 21:09:45.988559    5402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 21:09:46.000036    5402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 21:09:46.013593    5402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 21:09:46.024209    5402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 21:09:46.034558    5402 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0718 21:09:46.056896    5402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 21:09:46.067341    5402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 21:09:46.082374    5402 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0718 21:09:46.082726    5402 ssh_runner.go:195] Run: which cri-dockerd
	I0718 21:09:46.085565    5402 command_runner.go:130] > /usr/bin/cri-dockerd
	I0718 21:09:46.085708    5402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0718 21:09:46.093034    5402 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0718 21:09:46.106748    5402 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 21:09:46.201556    5402 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 21:09:46.317115    5402 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0718 21:09:46.317180    5402 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0718 21:09:46.332170    5402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:09:46.431120    5402 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 21:09:48.791524    5402 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.360305433s)
	I0718 21:09:48.791584    5402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0718 21:09:48.801762    5402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 21:09:48.811580    5402 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0718 21:09:48.904862    5402 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0718 21:09:49.019139    5402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:09:49.131102    5402 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0718 21:09:49.144595    5402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0718 21:09:49.155003    5402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:09:49.248782    5402 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0718 21:09:49.313017    5402 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0718 21:09:49.313094    5402 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0718 21:09:49.317280    5402 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0718 21:09:49.317291    5402 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0718 21:09:49.317296    5402 command_runner.go:130] > Device: 0,22	Inode: 760         Links: 1
	I0718 21:09:49.317301    5402 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0718 21:09:49.317305    5402 command_runner.go:130] > Access: 2024-07-19 04:09:49.339736965 +0000
	I0718 21:09:49.317309    5402 command_runner.go:130] > Modify: 2024-07-19 04:09:49.339736965 +0000
	I0718 21:09:49.317313    5402 command_runner.go:130] > Change: 2024-07-19 04:09:49.341736965 +0000
	I0718 21:09:49.317317    5402 command_runner.go:130] >  Birth: -
	I0718 21:09:49.317499    5402 start.go:563] Will wait 60s for crictl version
	I0718 21:09:49.317556    5402 ssh_runner.go:195] Run: which crictl
	I0718 21:09:49.320486    5402 command_runner.go:130] > /usr/bin/crictl
	I0718 21:09:49.320654    5402 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0718 21:09:49.347451    5402 command_runner.go:130] > Version:  0.1.0
	I0718 21:09:49.347463    5402 command_runner.go:130] > RuntimeName:  docker
	I0718 21:09:49.347467    5402 command_runner.go:130] > RuntimeVersion:  27.0.3
	I0718 21:09:49.347471    5402 command_runner.go:130] > RuntimeApiVersion:  v1
	I0718 21:09:49.348520    5402 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0718 21:09:49.348590    5402 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 21:09:49.365113    5402 command_runner.go:130] > 27.0.3
	I0718 21:09:49.366052    5402 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0718 21:09:49.383647    5402 command_runner.go:130] > 27.0.3
	I0718 21:09:49.426196    5402 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0718 21:09:49.426245    5402 main.go:141] libmachine: (multinode-127000) Calling .GetIP
	I0718 21:09:49.426631    5402 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0718 21:09:49.431146    5402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 21:09:49.440685    5402 kubeadm.go:883] updating cluster {Name:multinode-127000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion
:v1.30.3 ClusterName:multinode-127000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logvi
ewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP
: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0718 21:09:49.440771    5402 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:09:49.440834    5402 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0718 21:09:49.452806    5402 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0718 21:09:49.452819    5402 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0718 21:09:49.452824    5402 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0718 21:09:49.452840    5402 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0718 21:09:49.452845    5402 command_runner.go:130] > kindest/kindnetd:v20240715-585640e9
	I0718 21:09:49.452849    5402 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0718 21:09:49.452856    5402 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0718 21:09:49.452860    5402 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0718 21:09:49.452864    5402 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 21:09:49.452869    5402 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0718 21:09:49.453467    5402 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0718 21:09:49.453475    5402 docker.go:615] Images already preloaded, skipping extraction
	I0718 21:09:49.453541    5402 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0718 21:09:49.466729    5402 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.30.3
	I0718 21:09:49.466741    5402 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.30.3
	I0718 21:09:49.466746    5402 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.30.3
	I0718 21:09:49.466750    5402 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.30.3
	I0718 21:09:49.466754    5402 command_runner.go:130] > kindest/kindnetd:v20240715-585640e9
	I0718 21:09:49.466757    5402 command_runner.go:130] > registry.k8s.io/etcd:3.5.12-0
	I0718 21:09:49.466762    5402 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.1
	I0718 21:09:49.466766    5402 command_runner.go:130] > registry.k8s.io/pause:3.9
	I0718 21:09:49.466770    5402 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0718 21:09:49.466774    5402 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0718 21:09:49.466808    5402 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	kindest/kindnetd:v20240715-585640e9
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0718 21:09:49.466823    5402 cache_images.go:84] Images are preloaded, skipping loading
	I0718 21:09:49.466833    5402 kubeadm.go:934] updating node { 192.169.0.17 8443 v1.30.3 docker true true} ...
	I0718 21:09:49.466921    5402 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-127000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.17
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:multinode-127000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0718 21:09:49.466989    5402 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0718 21:09:49.486123    5402 command_runner.go:130] > cgroupfs
	I0718 21:09:49.487120    5402 cni.go:84] Creating CNI manager for ""
	I0718 21:09:49.487129    5402 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0718 21:09:49.487140    5402 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0718 21:09:49.487156    5402 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.17 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-127000 NodeName:multinode-127000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.17"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.17 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0718 21:09:49.487241    5402 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.17
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-127000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.17
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.17"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0718 21:09:49.487301    5402 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0718 21:09:49.494732    5402 command_runner.go:130] > kubeadm
	I0718 21:09:49.494740    5402 command_runner.go:130] > kubectl
	I0718 21:09:49.494744    5402 command_runner.go:130] > kubelet
	I0718 21:09:49.494799    5402 binaries.go:44] Found k8s binaries, skipping transfer
	I0718 21:09:49.494840    5402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0718 21:09:49.502358    5402 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0718 21:09:49.516286    5402 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0718 21:09:49.529475    5402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2158 bytes)
	I0718 21:09:49.543040    5402 ssh_runner.go:195] Run: grep 192.169.0.17	control-plane.minikube.internal$ /etc/hosts
	I0718 21:09:49.545881    5402 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.17	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0718 21:09:49.554943    5402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:09:49.644775    5402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 21:09:49.660239    5402 certs.go:68] Setting up /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000 for IP: 192.169.0.17
	I0718 21:09:49.660251    5402 certs.go:194] generating shared ca certs ...
	I0718 21:09:49.660273    5402 certs.go:226] acquiring lock for ca certs: {Name:mka1585510108908e8b36055df3736f0521555f2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:09:49.660467    5402 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.key
	I0718 21:09:49.660547    5402 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19302-1411/.minikube/proxy-client-ca.key
	I0718 21:09:49.660557    5402 certs.go:256] generating profile certs ...
	I0718 21:09:49.660678    5402 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/client.key
	I0718 21:09:49.660759    5402 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/apiserver.key.b7156be1
	I0718 21:09:49.660831    5402 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/proxy-client.key
	I0718 21:09:49.660838    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0718 21:09:49.660859    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0718 21:09:49.660877    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0718 21:09:49.660898    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0718 21:09:49.660916    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0718 21:09:49.660945    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0718 21:09:49.660976    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0718 21:09:49.660995    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0718 21:09:49.661097    5402 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/1948.pem (1338 bytes)
	W0718 21:09:49.661144    5402 certs.go:480] ignoring /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/1948_empty.pem, impossibly tiny 0 bytes
	I0718 21:09:49.661153    5402 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca-key.pem (1679 bytes)
	I0718 21:09:49.661188    5402 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem (1082 bytes)
	I0718 21:09:49.661223    5402 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/cert.pem (1123 bytes)
	I0718 21:09:49.661254    5402 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/key.pem (1675 bytes)
	I0718 21:09:49.661322    5402 certs.go:484] found cert: /Users/jenkins/minikube-integration/19302-1411/.minikube/files/etc/ssl/certs/19482.pem (1708 bytes)
	I0718 21:09:49.661354    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/files/etc/ssl/certs/19482.pem -> /usr/share/ca-certificates/19482.pem
	I0718 21:09:49.661375    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0718 21:09:49.661393    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/1948.pem -> /usr/share/ca-certificates/1948.pem
	I0718 21:09:49.661859    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0718 21:09:49.696993    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0718 21:09:49.720296    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0718 21:09:49.748941    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0718 21:09:49.772048    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0718 21:09:49.791992    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0718 21:09:49.811717    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0718 21:09:49.831177    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0718 21:09:49.850401    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/files/etc/ssl/certs/19482.pem --> /usr/share/ca-certificates/19482.pem (1708 bytes)
	I0718 21:09:49.869625    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0718 21:09:49.888390    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/1948.pem --> /usr/share/ca-certificates/1948.pem (1338 bytes)
	I0718 21:09:49.907734    5402 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0718 21:09:49.921233    5402 ssh_runner.go:195] Run: openssl version
	I0718 21:09:49.925446    5402 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0718 21:09:49.925636    5402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0718 21:09:49.934756    5402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0718 21:09:49.938109    5402 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Jul 19 03:28 /usr/share/ca-certificates/minikubeCA.pem
	I0718 21:09:49.938268    5402 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 19 03:28 /usr/share/ca-certificates/minikubeCA.pem
	I0718 21:09:49.938317    5402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0718 21:09:49.942407    5402 command_runner.go:130] > b5213941
	I0718 21:09:49.942629    5402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0718 21:09:49.951988    5402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1948.pem && ln -fs /usr/share/ca-certificates/1948.pem /etc/ssl/certs/1948.pem"
	I0718 21:09:49.961221    5402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1948.pem
	I0718 21:09:49.964542    5402 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Jul 19 03:36 /usr/share/ca-certificates/1948.pem
	I0718 21:09:49.964640    5402 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 19 03:36 /usr/share/ca-certificates/1948.pem
	I0718 21:09:49.964680    5402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1948.pem
	I0718 21:09:49.968968    5402 command_runner.go:130] > 51391683
	I0718 21:09:49.969144    5402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1948.pem /etc/ssl/certs/51391683.0"
	I0718 21:09:49.978200    5402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/19482.pem && ln -fs /usr/share/ca-certificates/19482.pem /etc/ssl/certs/19482.pem"
	I0718 21:09:49.987317    5402 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/19482.pem
	I0718 21:09:49.990507    5402 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Jul 19 03:36 /usr/share/ca-certificates/19482.pem
	I0718 21:09:49.990626    5402 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 19 03:36 /usr/share/ca-certificates/19482.pem
	I0718 21:09:49.990659    5402 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/19482.pem
	I0718 21:09:49.994688    5402 command_runner.go:130] > 3ec20f2e
	I0718 21:09:49.994832    5402 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/19482.pem /etc/ssl/certs/3ec20f2e.0"
	I0718 21:09:50.003995    5402 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0718 21:09:50.007252    5402 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0718 21:09:50.007264    5402 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0718 21:09:50.007271    5402 command_runner.go:130] > Device: 253,1	Inode: 6290248     Links: 1
	I0718 21:09:50.007279    5402 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0718 21:09:50.007287    5402 command_runner.go:130] > Access: 2024-07-19 04:06:29.123168803 +0000
	I0718 21:09:50.007294    5402 command_runner.go:130] > Modify: 2024-07-19 04:02:41.815943936 +0000
	I0718 21:09:50.007302    5402 command_runner.go:130] > Change: 2024-07-19 04:02:41.815943936 +0000
	I0718 21:09:50.007307    5402 command_runner.go:130] >  Birth: 2024-07-19 04:02:41.815943936 +0000
	I0718 21:09:50.007412    5402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0718 21:09:50.011504    5402 command_runner.go:130] > Certificate will not expire
	I0718 21:09:50.011687    5402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0718 21:09:50.015766    5402 command_runner.go:130] > Certificate will not expire
	I0718 21:09:50.015962    5402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0718 21:09:50.020143    5402 command_runner.go:130] > Certificate will not expire
	I0718 21:09:50.020296    5402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0718 21:09:50.024353    5402 command_runner.go:130] > Certificate will not expire
	I0718 21:09:50.024533    5402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0718 21:09:50.028589    5402 command_runner.go:130] > Certificate will not expire
	I0718 21:09:50.028732    5402 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0718 21:09:50.032988    5402 command_runner.go:130] > Certificate will not expire
	I0718 21:09:50.033032    5402 kubeadm.go:392] StartCluster: {Name:multinode-127000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.30.3 ClusterName:multinode-127000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.18 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewe
r:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: S
SHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:09:50.033135    5402 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0718 21:09:50.046911    5402 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0718 21:09:50.055273    5402 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0718 21:09:50.055282    5402 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0718 21:09:50.055286    5402 command_runner.go:130] > /var/lib/minikube/etcd:
	I0718 21:09:50.055289    5402 command_runner.go:130] > member
	I0718 21:09:50.055366    5402 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0718 21:09:50.055377    5402 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0718 21:09:50.055418    5402 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0718 21:09:50.063600    5402 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0718 21:09:50.063910    5402 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-127000" does not appear in /Users/jenkins/minikube-integration/19302-1411/kubeconfig
	I0718 21:09:50.063998    5402 kubeconfig.go:62] /Users/jenkins/minikube-integration/19302-1411/kubeconfig needs updating (will repair): [kubeconfig missing "multinode-127000" cluster setting kubeconfig missing "multinode-127000" context setting]
	I0718 21:09:50.064200    5402 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1411/kubeconfig: {Name:mk98b5ca4921c9b1e25bd07d5b44b266493ad1f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:09:50.064773    5402 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19302-1411/kubeconfig
	I0718 21:09:50.064994    5402 kapi.go:59] client config for multinode-127000: &rest.Config{Host:"https://192.169.0.17:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/client.key", CAFile:"/Users/jenkins/minikube-integration/19302-1411/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextP
rotos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x476bba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0718 21:09:50.065324    5402 cert_rotation.go:137] Starting client certificate rotation controller
	I0718 21:09:50.065495    5402 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0718 21:09:50.073560    5402 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.17
	I0718 21:09:50.073575    5402 kubeadm.go:1160] stopping kube-system containers ...
	I0718 21:09:50.073644    5402 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0718 21:09:50.088335    5402 command_runner.go:130] > 6396364b3e0e
	I0718 21:09:50.088346    5402 command_runner.go:130] > 1368162a8f09
	I0718 21:09:50.088349    5402 command_runner.go:130] > dff6311790e5
	I0718 21:09:50.088352    5402 command_runner.go:130] > d4dc52db5a77
	I0718 21:09:50.088355    5402 command_runner.go:130] > 3e7cc50a2d57
	I0718 21:09:50.088358    5402 command_runner.go:130] > e12c9aa28fc6
	I0718 21:09:50.088361    5402 command_runner.go:130] > e8e8bc7a035c
	I0718 21:09:50.088365    5402 command_runner.go:130] > f3dc8d1aa918
	I0718 21:09:50.088368    5402 command_runner.go:130] > fda4eb380979
	I0718 21:09:50.088371    5402 command_runner.go:130] > f8a7e04c5c8e
	I0718 21:09:50.088374    5402 command_runner.go:130] > 35aa60e7a3f8
	I0718 21:09:50.088377    5402 command_runner.go:130] > 539be4bab7a7
	I0718 21:09:50.088380    5402 command_runner.go:130] > 26653cd0d581
	I0718 21:09:50.088383    5402 command_runner.go:130] > 1acc8e66b837
	I0718 21:09:50.088389    5402 command_runner.go:130] > ac73727fe777
	I0718 21:09:50.088393    5402 command_runner.go:130] > 579e883db8c6
	I0718 21:09:50.088396    5402 command_runner.go:130] > a77ea521ac99
	I0718 21:09:50.088407    5402 command_runner.go:130] > 743f61bb3d97
	I0718 21:09:50.088411    5402 command_runner.go:130] > 7eaf0af2d35e
	I0718 21:09:50.088415    5402 command_runner.go:130] > 2a8b01139615
	I0718 21:09:50.088418    5402 command_runner.go:130] > 995c0513497e
	I0718 21:09:50.088423    5402 command_runner.go:130] > 6ca51eca7060
	I0718 21:09:50.088427    5402 command_runner.go:130] > f3a95fa340e8
	I0718 21:09:50.088430    5402 command_runner.go:130] > 6d37e86392a7
	I0718 21:09:50.088434    5402 command_runner.go:130] > 7ed33e97b1ef
	I0718 21:09:50.088437    5402 command_runner.go:130] > 75b247638cc4
	I0718 21:09:50.088441    5402 command_runner.go:130] > 10560abb7f24
	I0718 21:09:50.088445    5402 command_runner.go:130] > f0d043288f29
	I0718 21:09:50.088448    5402 command_runner.go:130] > f12144ab85e8
	I0718 21:09:50.088451    5402 command_runner.go:130] > 94b0a5483d84
	I0718 21:09:50.088454    5402 command_runner.go:130] > cc26bfa07489
	I0718 21:09:50.088890    5402 docker.go:483] Stopping containers: [6396364b3e0e 1368162a8f09 dff6311790e5 d4dc52db5a77 3e7cc50a2d57 e12c9aa28fc6 e8e8bc7a035c f3dc8d1aa918 fda4eb380979 f8a7e04c5c8e 35aa60e7a3f8 539be4bab7a7 26653cd0d581 1acc8e66b837 ac73727fe777 579e883db8c6 a77ea521ac99 743f61bb3d97 7eaf0af2d35e 2a8b01139615 995c0513497e 6ca51eca7060 f3a95fa340e8 6d37e86392a7 7ed33e97b1ef 75b247638cc4 10560abb7f24 f0d043288f29 f12144ab85e8 94b0a5483d84 cc26bfa07489]
	I0718 21:09:50.088966    5402 ssh_runner.go:195] Run: docker stop 6396364b3e0e 1368162a8f09 dff6311790e5 d4dc52db5a77 3e7cc50a2d57 e12c9aa28fc6 e8e8bc7a035c f3dc8d1aa918 fda4eb380979 f8a7e04c5c8e 35aa60e7a3f8 539be4bab7a7 26653cd0d581 1acc8e66b837 ac73727fe777 579e883db8c6 a77ea521ac99 743f61bb3d97 7eaf0af2d35e 2a8b01139615 995c0513497e 6ca51eca7060 f3a95fa340e8 6d37e86392a7 7ed33e97b1ef 75b247638cc4 10560abb7f24 f0d043288f29 f12144ab85e8 94b0a5483d84 cc26bfa07489
	I0718 21:09:50.101792    5402 command_runner.go:130] > 6396364b3e0e
	I0718 21:09:50.101804    5402 command_runner.go:130] > 1368162a8f09
	I0718 21:09:50.101807    5402 command_runner.go:130] > dff6311790e5
	I0718 21:09:50.102884    5402 command_runner.go:130] > d4dc52db5a77
	I0718 21:09:50.105199    5402 command_runner.go:130] > 3e7cc50a2d57
	I0718 21:09:50.105339    5402 command_runner.go:130] > e12c9aa28fc6
	I0718 21:09:50.105425    5402 command_runner.go:130] > e8e8bc7a035c
	I0718 21:09:50.105466    5402 command_runner.go:130] > f3dc8d1aa918
	I0718 21:09:50.106091    5402 command_runner.go:130] > fda4eb380979
	I0718 21:09:50.106097    5402 command_runner.go:130] > f8a7e04c5c8e
	I0718 21:09:50.106100    5402 command_runner.go:130] > 35aa60e7a3f8
	I0718 21:09:50.106103    5402 command_runner.go:130] > 539be4bab7a7
	I0718 21:09:50.106106    5402 command_runner.go:130] > 26653cd0d581
	I0718 21:09:50.106109    5402 command_runner.go:130] > 1acc8e66b837
	I0718 21:09:50.106112    5402 command_runner.go:130] > ac73727fe777
	I0718 21:09:50.106115    5402 command_runner.go:130] > 579e883db8c6
	I0718 21:09:50.106118    5402 command_runner.go:130] > a77ea521ac99
	I0718 21:09:50.106122    5402 command_runner.go:130] > 743f61bb3d97
	I0718 21:09:50.106126    5402 command_runner.go:130] > 7eaf0af2d35e
	I0718 21:09:50.106157    5402 command_runner.go:130] > 2a8b01139615
	I0718 21:09:50.106213    5402 command_runner.go:130] > 995c0513497e
	I0718 21:09:50.106220    5402 command_runner.go:130] > 6ca51eca7060
	I0718 21:09:50.106223    5402 command_runner.go:130] > f3a95fa340e8
	I0718 21:09:50.106226    5402 command_runner.go:130] > 6d37e86392a7
	I0718 21:09:50.106238    5402 command_runner.go:130] > 7ed33e97b1ef
	I0718 21:09:50.106242    5402 command_runner.go:130] > 75b247638cc4
	I0718 21:09:50.106245    5402 command_runner.go:130] > 10560abb7f24
	I0718 21:09:50.106248    5402 command_runner.go:130] > f0d043288f29
	I0718 21:09:50.106252    5402 command_runner.go:130] > f12144ab85e8
	I0718 21:09:50.106258    5402 command_runner.go:130] > 94b0a5483d84
	I0718 21:09:50.106261    5402 command_runner.go:130] > cc26bfa07489
	I0718 21:09:50.107063    5402 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0718 21:09:50.120194    5402 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0718 21:09:50.128584    5402 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0718 21:09:50.128595    5402 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0718 21:09:50.128601    5402 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0718 21:09:50.128606    5402 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0718 21:09:50.128625    5402 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0718 21:09:50.128630    5402 kubeadm.go:157] found existing configuration files:
	
	I0718 21:09:50.128678    5402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0718 21:09:50.137133    5402 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0718 21:09:50.137153    5402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0718 21:09:50.137193    5402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0718 21:09:50.145270    5402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0718 21:09:50.153173    5402 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0718 21:09:50.153199    5402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0718 21:09:50.153241    5402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0718 21:09:50.161259    5402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0718 21:09:50.168976    5402 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0718 21:09:50.169002    5402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0718 21:09:50.169035    5402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0718 21:09:50.177124    5402 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0718 21:09:50.184903    5402 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0718 21:09:50.184923    5402 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0718 21:09:50.184963    5402 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0718 21:09:50.193193    5402 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0718 21:09:50.201432    5402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0718 21:09:50.264206    5402 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0718 21:09:50.264376    5402 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0718 21:09:50.264550    5402 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0718 21:09:50.264702    5402 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0718 21:09:50.264936    5402 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0718 21:09:50.265099    5402 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0718 21:09:50.265438    5402 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0718 21:09:50.265603    5402 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0718 21:09:50.265807    5402 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0718 21:09:50.265968    5402 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0718 21:09:50.266129    5402 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0718 21:09:50.266362    5402 command_runner.go:130] > [certs] Using the existing "sa" key
	I0718 21:09:50.267280    5402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0718 21:09:50.305508    5402 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0718 21:09:50.512259    5402 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0718 21:09:50.682912    5402 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0718 21:09:50.850952    5402 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0718 21:09:51.139031    5402 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0718 21:09:51.231479    5402 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0718 21:09:51.233315    5402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0718 21:09:51.287873    5402 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0718 21:09:51.288489    5402 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0718 21:09:51.288539    5402 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0718 21:09:51.392192    5402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0718 21:09:51.451968    5402 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0718 21:09:51.451996    5402 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0718 21:09:51.453897    5402 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0718 21:09:51.454650    5402 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0718 21:09:51.455902    5402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0718 21:09:51.510173    5402 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0718 21:09:51.525091    5402 api_server.go:52] waiting for apiserver process to appear ...
	I0718 21:09:51.525156    5402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 21:09:52.025288    5402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 21:09:52.526915    5402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 21:09:53.025265    5402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 21:09:53.527315    5402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 21:09:53.539281    5402 command_runner.go:130] > 1628
	I0718 21:09:53.539501    5402 api_server.go:72] duration metric: took 2.014357194s to wait for apiserver process to appear ...
	I0718 21:09:53.539510    5402 api_server.go:88] waiting for apiserver healthz status ...
	I0718 21:09:53.539526    5402 api_server.go:253] Checking apiserver healthz at https://192.169.0.17:8443/healthz ...
	I0718 21:09:56.800670    5402 api_server.go:279] https://192.169.0.17:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0718 21:09:56.800686    5402 api_server.go:103] status: https://192.169.0.17:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0718 21:09:56.800702    5402 api_server.go:253] Checking apiserver healthz at https://192.169.0.17:8443/healthz ...
	I0718 21:09:56.809886    5402 api_server.go:279] https://192.169.0.17:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0718 21:09:56.809907    5402 api_server.go:103] status: https://192.169.0.17:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0718 21:09:57.039841    5402 api_server.go:253] Checking apiserver healthz at https://192.169.0.17:8443/healthz ...
	I0718 21:09:57.044192    5402 api_server.go:279] https://192.169.0.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0718 21:09:57.044206    5402 api_server.go:103] status: https://192.169.0.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0718 21:09:57.539786    5402 api_server.go:253] Checking apiserver healthz at https://192.169.0.17:8443/healthz ...
	I0718 21:09:57.546110    5402 api_server.go:279] https://192.169.0.17:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0718 21:09:57.546132    5402 api_server.go:103] status: https://192.169.0.17:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0718 21:09:58.041719    5402 api_server.go:253] Checking apiserver healthz at https://192.169.0.17:8443/healthz ...
	I0718 21:09:58.045667    5402 api_server.go:279] https://192.169.0.17:8443/healthz returned 200:
	ok
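	The healthz sequence above is a retry-until-healthy loop: minikube polls the apiserver's /healthz endpoint roughly every 500ms, tolerating the early 403 (anonymous user, RBAC bootstrap not finished) and 500 (post-start hooks still failing) responses until a 200 arrives. A minimal self-contained sketch of that pattern — the real check is an HTTPS GET such as `curl -sk https://192.169.0.17:8443/healthz`; here the endpoint is simulated with a counter so the script runs anywhere:

```shell
#!/bin/sh
# Sketch of the poll loop, with the network call replaced by a stub.
attempts=0
check_healthz() {
  # Stand-in for the real probe: curl -sk https://<control-plane>:8443/healthz
  attempts=$((attempts + 1))
  if [ "$attempts" -lt 3 ]; then
    echo "healthz check failed"   # mimics the early 403/500 responses
    return 1
  fi
  echo "ok"                       # mimics the final 200 response
  return 0
}

until check_healthz; do
  sleep 0.5                       # back off between attempts, as the log does
done
echo "apiserver healthy after $attempts attempts"
```

The loop deliberately treats any non-200 the same way: during kubeadm bootstrap the failing checks (`rbac/bootstrap-roles`, `scheduling/bootstrap-system-priority-classes`) are expected to clear on their own, so retrying is the correct strategy rather than failing fast.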
	I0718 21:09:58.045727    5402 round_trippers.go:463] GET https://192.169.0.17:8443/version
	I0718 21:09:58.045732    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:58.045740    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:58.045743    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:58.055667    5402 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0718 21:09:58.055680    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:58.055685    5402 round_trippers.go:580]     Audit-Id: 08c448ae-e25a-4c29-b867-a5570bd6aee8
	I0718 21:09:58.055688    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:58.055691    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:58.055693    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:58.055695    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:58.055704    5402 round_trippers.go:580]     Content-Length: 263
	I0718 21:09:58.055706    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:58 GMT
	I0718 21:09:58.055726    5402 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0718 21:09:58.055779    5402 api_server.go:141] control plane version: v1.30.3
	I0718 21:09:58.055794    5402 api_server.go:131] duration metric: took 4.51614154s to wait for apiserver health ...
	I0718 21:09:58.055801    5402 cni.go:84] Creating CNI manager for ""
	I0718 21:09:58.055804    5402 cni.go:136] multinode detected (2 nodes found), recommending kindnet
	I0718 21:09:58.079155    5402 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0718 21:09:58.115320    5402 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0718 21:09:58.120934    5402 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0718 21:09:58.120950    5402 command_runner.go:130] >   Size: 2785880   	Blocks: 5448       IO Block: 4096   regular file
	I0718 21:09:58.120956    5402 command_runner.go:130] > Device: 0,17	Inode: 3497        Links: 1
	I0718 21:09:58.120962    5402 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0718 21:09:58.120965    5402 command_runner.go:130] > Access: 2024-07-19 04:09:17.239306511 +0000
	I0718 21:09:58.120970    5402 command_runner.go:130] > Modify: 2024-07-18 23:04:21.000000000 +0000
	I0718 21:09:58.120974    5402 command_runner.go:130] > Change: 2024-07-19 04:09:15.712306616 +0000
	I0718 21:09:58.120978    5402 command_runner.go:130] >  Birth: -
	I0718 21:09:58.121033    5402 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.3/kubectl ...
	I0718 21:09:58.121040    5402 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0718 21:09:58.148946    5402 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.3/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0718 21:09:58.749336    5402 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0718 21:09:58.749351    5402 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0718 21:09:58.749356    5402 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0718 21:09:58.749360    5402 command_runner.go:130] > daemonset.apps/kindnet configured
	I0718 21:09:58.749400    5402 system_pods.go:43] waiting for kube-system pods to appear ...
	I0718 21:09:58.749454    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods
	I0718 21:09:58.749459    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:58.749465    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:58.749469    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:58.755364    5402 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0718 21:09:58.755377    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:58.755383    5402 round_trippers.go:580]     Audit-Id: 09fbb9ca-7140-4f56-8e7f-3d3135537de8
	I0718 21:09:58.755385    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:58.755388    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:58.755391    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:58.755403    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:58.755406    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:58 GMT
	I0718 21:09:58.756887    5402 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1161"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87618 chars]
	I0718 21:09:58.759964    5402 system_pods.go:59] 12 kube-system pods found
	I0718 21:09:58.759979    5402 system_pods.go:61] "coredns-7db6d8ff4d-76x8d" [55e9cca6-f3d6-4b2f-a8de-df91db8e186a] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0718 21:09:58.759984    5402 system_pods.go:61] "etcd-multinode-127000" [4d4a84eb-c0c3-44f3-a515-e99b9ba8fe88] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0718 21:09:58.759989    5402 system_pods.go:61] "kindnet-28cb8" [f603b4ff-800e-40e6-9c53-20626c4dfd35] Running
	I0718 21:09:58.759992    5402 system_pods.go:61] "kindnet-ks8xk" [358f14a8-284b-4570-96d1-d519f18269fa] Running
	I0718 21:09:58.759995    5402 system_pods.go:61] "kindnet-lt5bk" [f81f29e6-917b-4347-ad73-aa9b51320b17] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0718 21:09:58.759998    5402 system_pods.go:61] "kube-apiserver-multinode-127000" [15bce3aa-75a4-4cca-beec-20a4eeed2c14] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0718 21:09:58.760005    5402 system_pods.go:61] "kube-controller-manager-multinode-127000" [38250320-d12a-418f-867a-05a82f4f876c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0718 21:09:58.760014    5402 system_pods.go:61] "kube-proxy-8j597" [51e85da8-2b18-4373-8f84-65ed52d6bc13] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0718 21:09:58.760017    5402 system_pods.go:61] "kube-proxy-8nvff" [4b740c91-be18-4bc8-9698-0b4fbda8695e] Running
	I0718 21:09:58.760023    5402 system_pods.go:61] "kube-proxy-nxf5m" [e48c420f-b1a1-4a9e-bc7e-fa0d640e5764] Running
	I0718 21:09:58.760027    5402 system_pods.go:61] "kube-scheduler-multinode-127000" [3060259c-364e-4c24-ae43-107cc1973705] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0718 21:09:58.760038    5402 system_pods.go:61] "storage-provisioner" [cd072b88-33f2-4988-985a-f1a00f8eb449] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0718 21:09:58.760043    5402 system_pods.go:74] duration metric: took 10.63518ms to wait for pod list to return data ...
	I0718 21:09:58.760050    5402 node_conditions.go:102] verifying NodePressure condition ...
	I0718 21:09:58.760089    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes
	I0718 21:09:58.760093    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:58.760099    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:58.760102    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:58.761803    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:09:58.761812    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:58.761817    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:58.761820    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:58.761823    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:58.761826    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:58 GMT
	I0718 21:09:58.761829    5402 round_trippers.go:580]     Audit-Id: c878751b-2510-4c40-b234-3b903dee2914
	I0718 21:09:58.761832    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:58.761921    5402 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1161"},"items":[{"metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10158 chars]
	I0718 21:09:58.762371    5402 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0718 21:09:58.762383    5402 node_conditions.go:123] node cpu capacity is 2
	I0718 21:09:58.762391    5402 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0718 21:09:58.762394    5402 node_conditions.go:123] node cpu capacity is 2
	I0718 21:09:58.762400    5402 node_conditions.go:105] duration metric: took 2.343982ms to run NodePressure ...
	I0718 21:09:58.762409    5402 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.3:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0718 21:09:58.922323    5402 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0718 21:09:59.013139    5402 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0718 21:09:59.014290    5402 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0718 21:09:59.014353    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0718 21:09:59.014359    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.014364    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.014369    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.016141    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:09:59.016151    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.016156    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.016175    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:59 GMT
	I0718 21:09:59.016180    5402 round_trippers.go:580]     Audit-Id: fe48c28c-9761-420d-abcb-aa1ce4ad0881
	I0718 21:09:59.016185    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.016188    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.016195    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.016601    5402 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1166"},"items":[{"metadata":{"name":"etcd-multinode-127000","namespace":"kube-system","uid":"4d4a84eb-c0c3-44f3-a515-e99b9ba8fe88","resourceVersion":"1155","creationTimestamp":"2024-07-19T04:02:50Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.17:2379","kubernetes.io/config.hash":"dd34cf21994d39cf28d26460f62a29d2","kubernetes.io/config.mirror":"dd34cf21994d39cf28d26460f62a29d2","kubernetes.io/config.seen":"2024-07-19T04:02:50.143265078Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:02:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotation
s":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-client-urls":{},"f:k [truncated 30919 chars]
	I0718 21:09:59.018498    5402 kubeadm.go:739] kubelet initialised
	I0718 21:09:59.018510    5402 kubeadm.go:740] duration metric: took 4.210532ms waiting for restarted kubelet to initialise ...
	I0718 21:09:59.018517    5402 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0718 21:09:59.018555    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods
	I0718 21:09:59.018561    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.018571    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.018576    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.020778    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:09:59.020794    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.020804    5402 round_trippers.go:580]     Audit-Id: a7c125f4-6813-4db9-9dcd-79a0c4aa4f02
	I0718 21:09:59.020811    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.020818    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.020823    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.020828    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.020833    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:59 GMT
	I0718 21:09:59.021541    5402 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1166"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 87025 chars]
	I0718 21:09:59.023364    5402 pod_ready.go:78] waiting up to 4m0s for pod "coredns-7db6d8ff4d-76x8d" in "kube-system" namespace to be "Ready" ...
	I0718 21:09:59.023407    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:09:59.023412    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.023418    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.023422    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.024748    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:09:59.024754    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.024759    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.024762    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.024766    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:59 GMT
	I0718 21:09:59.024769    5402 round_trippers.go:580]     Audit-Id: 2b8cf813-83e9-413c-9f19-1eb85b059e7f
	I0718 21:09:59.024772    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.024774    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.025030    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:09:59.025270    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:09:59.025277    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.025283    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.025285    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.026373    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:09:59.026379    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.026384    5402 round_trippers.go:580]     Audit-Id: 42e96ccd-f10e-4e97-9b84-5df50b2079da
	I0718 21:09:59.026389    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.026395    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.026399    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.026403    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.026405    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:59 GMT
	I0718 21:09:59.026693    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:09:59.026875    5402 pod_ready.go:97] node "multinode-127000" hosting pod "coredns-7db6d8ff4d-76x8d" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-127000" has status "Ready":"False"
	I0718 21:09:59.026885    5402 pod_ready.go:81] duration metric: took 3.510264ms for pod "coredns-7db6d8ff4d-76x8d" in "kube-system" namespace to be "Ready" ...
	E0718 21:09:59.026891    5402 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-127000" hosting pod "coredns-7db6d8ff4d-76x8d" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-127000" has status "Ready":"False"
	I0718 21:09:59.026898    5402 pod_ready.go:78] waiting up to 4m0s for pod "etcd-multinode-127000" in "kube-system" namespace to be "Ready" ...
	I0718 21:09:59.026926    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-127000
	I0718 21:09:59.026931    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.026936    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.026941    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.028065    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:09:59.028088    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.028117    5402 round_trippers.go:580]     Audit-Id: e0299f71-819f-4f69-baf4-c57220127541
	I0718 21:09:59.028126    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.028131    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.028139    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.028142    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.028145    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:59 GMT
	I0718 21:09:59.028366    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-127000","namespace":"kube-system","uid":"4d4a84eb-c0c3-44f3-a515-e99b9ba8fe88","resourceVersion":"1155","creationTimestamp":"2024-07-19T04:02:50Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.17:2379","kubernetes.io/config.hash":"dd34cf21994d39cf28d26460f62a29d2","kubernetes.io/config.mirror":"dd34cf21994d39cf28d26460f62a29d2","kubernetes.io/config.seen":"2024-07-19T04:02:50.143265078Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:02:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6582 chars]
	I0718 21:09:59.028584    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:09:59.028591    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.028597    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.028601    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.029813    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:09:59.029822    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.029829    5402 round_trippers.go:580]     Audit-Id: 4d346494-cdd6-4318-9ed0-3ed37e0fccbb
	I0718 21:09:59.029834    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.029838    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.029841    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.029846    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.029850    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:59 GMT
	I0718 21:09:59.029953    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:09:59.030124    5402 pod_ready.go:97] node "multinode-127000" hosting pod "etcd-multinode-127000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-127000" has status "Ready":"False"
	I0718 21:09:59.030134    5402 pod_ready.go:81] duration metric: took 3.229729ms for pod "etcd-multinode-127000" in "kube-system" namespace to be "Ready" ...
	E0718 21:09:59.030139    5402 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-127000" hosting pod "etcd-multinode-127000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-127000" has status "Ready":"False"
	I0718 21:09:59.030148    5402 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-multinode-127000" in "kube-system" namespace to be "Ready" ...
	I0718 21:09:59.030175    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-127000
	I0718 21:09:59.030180    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.030185    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.030188    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.031210    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:09:59.031219    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.031226    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.031230    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.031253    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.031262    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.031267    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:59 GMT
	I0718 21:09:59.031272    5402 round_trippers.go:580]     Audit-Id: 34ce49d3-e90b-480b-b73f-33bce15e14d5
	I0718 21:09:59.031372    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-127000","namespace":"kube-system","uid":"15bce3aa-75a4-4cca-beec-20a4eeed2c14","resourceVersion":"1154","creationTimestamp":"2024-07-19T04:02:50Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.17:8443","kubernetes.io/config.hash":"adeddd763cb12ff26454c97d2cb34645","kubernetes.io/config.mirror":"adeddd763cb12ff26454c97d2cb34645","kubernetes.io/config.seen":"2024-07-19T04:02:50.143265837Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:02:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 8136 chars]
	I0718 21:09:59.031621    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:09:59.031630    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.031635    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.031640    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.032732    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:09:59.032739    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.032743    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:59 GMT
	I0718 21:09:59.032747    5402 round_trippers.go:580]     Audit-Id: 1b313689-f17e-4acc-a63c-87d3a4f5018f
	I0718 21:09:59.032750    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.032753    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.032756    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.032759    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.032952    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:09:59.033106    5402 pod_ready.go:97] node "multinode-127000" hosting pod "kube-apiserver-multinode-127000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-127000" has status "Ready":"False"
	I0718 21:09:59.033115    5402 pod_ready.go:81] duration metric: took 2.961848ms for pod "kube-apiserver-multinode-127000" in "kube-system" namespace to be "Ready" ...
	E0718 21:09:59.033121    5402 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-127000" hosting pod "kube-apiserver-multinode-127000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-127000" has status "Ready":"False"
	I0718 21:09:59.033130    5402 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-multinode-127000" in "kube-system" namespace to be "Ready" ...
	I0718 21:09:59.033158    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-127000
	I0718 21:09:59.033163    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.033168    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.033173    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.034245    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:09:59.034253    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.034258    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.034274    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.034282    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:59 GMT
	I0718 21:09:59.034286    5402 round_trippers.go:580]     Audit-Id: 4c4d07ff-aa50-4ff1-b585-9aea6e7b35e8
	I0718 21:09:59.034289    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.034341    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.034437    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-127000","namespace":"kube-system","uid":"38250320-d12a-418f-867a-05a82f4f876c","resourceVersion":"1157","creationTimestamp":"2024-07-19T04:02:50Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"14d5cf7b26b6a66b49878f0b6b5873c6","kubernetes.io/config.mirror":"14d5cf7b26b6a66b49878f0b6b5873c6","kubernetes.io/config.seen":"2024-07-19T04:02:50.143266437Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:02:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7729 chars]
	I0718 21:09:59.149980    5402 request.go:629] Waited for 115.176944ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:09:59.150034    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:09:59.150045    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.150056    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.150065    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.152642    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:09:59.152657    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.152665    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:59 GMT
	I0718 21:09:59.152684    5402 round_trippers.go:580]     Audit-Id: 92b155d6-d080-4b49-9905-169d50ccf694
	I0718 21:09:59.152696    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.152709    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.152713    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.152718    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.152808    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:09:59.153064    5402 pod_ready.go:97] node "multinode-127000" hosting pod "kube-controller-manager-multinode-127000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-127000" has status "Ready":"False"
	I0718 21:09:59.153078    5402 pod_ready.go:81] duration metric: took 119.9375ms for pod "kube-controller-manager-multinode-127000" in "kube-system" namespace to be "Ready" ...
	E0718 21:09:59.153086    5402 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-127000" hosting pod "kube-controller-manager-multinode-127000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-127000" has status "Ready":"False"
	I0718 21:09:59.153092    5402 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8j597" in "kube-system" namespace to be "Ready" ...
	I0718 21:09:59.350402    5402 request.go:629] Waited for 197.166967ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8j597
	I0718 21:09:59.350472    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8j597
	I0718 21:09:59.350482    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.350495    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.350502    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.352964    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:09:59.352977    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.352984    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.352988    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:59 GMT
	I0718 21:09:59.352991    5402 round_trippers.go:580]     Audit-Id: 2292ad80-0425-4b73-937c-fa5ab7918a27
	I0718 21:09:59.352994    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.352999    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.353002    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.353145    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8j597","generateName":"kube-proxy-","namespace":"kube-system","uid":"51e85da8-2b18-4373-8f84-65ed52d6bc13","resourceVersion":"1162","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4f07a990-f65d-45d1-9766-77572b6fc4bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4f07a990-f65d-45d1-9766-77572b6fc4bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0718 21:09:59.550218    5402 request.go:629] Waited for 196.67273ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:09:59.550333    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:09:59.550344    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.550355    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.550362    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.553436    5402 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 21:09:59.553447    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.553452    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.553455    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.553457    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.553460    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:59 GMT
	I0718 21:09:59.553462    5402 round_trippers.go:580]     Audit-Id: c5aed568-9d13-48da-ad3f-42d7d640129e
	I0718 21:09:59.553465    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.553552    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:09:59.553761    5402 pod_ready.go:97] node "multinode-127000" hosting pod "kube-proxy-8j597" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-127000" has status "Ready":"False"
	I0718 21:09:59.553771    5402 pod_ready.go:81] duration metric: took 400.661484ms for pod "kube-proxy-8j597" in "kube-system" namespace to be "Ready" ...
	E0718 21:09:59.553779    5402 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-127000" hosting pod "kube-proxy-8j597" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-127000" has status "Ready":"False"
	I0718 21:09:59.553784    5402 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-8nvff" in "kube-system" namespace to be "Ready" ...
	I0718 21:09:59.750143    5402 request.go:629] Waited for 196.283805ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8nvff
	I0718 21:09:59.750331    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8nvff
	I0718 21:09:59.750341    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.750353    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.750360    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.753003    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:09:59.753031    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.753044    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:09:59 GMT
	I0718 21:09:59.753052    5402 round_trippers.go:580]     Audit-Id: fffc89da-331d-4203-86d7-6713e44e73fb
	I0718 21:09:59.753056    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.753061    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.753065    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.753068    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.753221    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8nvff","generateName":"kube-proxy-","namespace":"kube-system","uid":"4b740c91-be18-4bc8-9698-0b4fbda8695e","resourceVersion":"1110","creationTimestamp":"2024-07-19T04:04:35Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4f07a990-f65d-45d1-9766-77572b6fc4bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:04:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4f07a990-f65d-45d1-9766-77572b6fc4bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0718 21:09:59.949756    5402 request.go:629] Waited for 196.192963ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/nodes/multinode-127000-m03
	I0718 21:09:59.949875    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000-m03
	I0718 21:09:59.949886    5402 round_trippers.go:469] Request Headers:
	I0718 21:09:59.949898    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:09:59.949905    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:09:59.952270    5402 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0718 21:09:59.952285    5402 round_trippers.go:577] Response Headers:
	I0718 21:09:59.952292    5402 round_trippers.go:580]     Audit-Id: ef29f1e3-1adf-4801-baa4-6382d1ffb9f1
	I0718 21:09:59.952297    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:09:59.952301    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:09:59.952304    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:09:59.952308    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:09:59.952312    5402 round_trippers.go:580]     Content-Length: 210
	I0718 21:09:59.952316    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:00 GMT
	I0718 21:09:59.952329    5402 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-127000-m03\" not found","reason":"NotFound","details":{"name":"multinode-127000-m03","kind":"nodes"},"code":404}
	I0718 21:09:59.952482    5402 pod_ready.go:97] node "multinode-127000-m03" hosting pod "kube-proxy-8nvff" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-127000-m03": nodes "multinode-127000-m03" not found
	I0718 21:09:59.952495    5402 pod_ready.go:81] duration metric: took 398.694196ms for pod "kube-proxy-8nvff" in "kube-system" namespace to be "Ready" ...
	E0718 21:09:59.952503    5402 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-127000-m03" hosting pod "kube-proxy-8nvff" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-127000-m03": nodes "multinode-127000-m03" not found
	I0718 21:09:59.952510    5402 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nxf5m" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:00.150340    5402 request.go:629] Waited for 197.77869ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nxf5m
	I0718 21:10:00.150459    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nxf5m
	I0718 21:10:00.150471    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:00.150490    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:00.150501    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:00.153318    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:00.153333    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:00.153339    5402 round_trippers.go:580]     Audit-Id: 737c0b60-d293-42db-a91b-00657bd68555
	I0718 21:10:00.153345    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:00.153349    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:00.153354    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:00.153359    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:00.153364    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:00 GMT
	I0718 21:10:00.153524    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nxf5m","generateName":"kube-proxy-","namespace":"kube-system","uid":"e48c420f-b1a1-4a9e-bc7e-fa0d640e5764","resourceVersion":"993","creationTimestamp":"2024-07-19T04:03:47Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4f07a990-f65d-45d1-9766-77572b6fc4bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4f07a990-f65d-45d1-9766-77572b6fc4bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0718 21:10:00.350603    5402 request.go:629] Waited for 196.732329ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/nodes/multinode-127000-m02
	I0718 21:10:00.350656    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000-m02
	I0718 21:10:00.350667    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:00.350735    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:00.350747    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:00.353211    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:00.353233    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:00.353243    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:00.353256    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:00.353260    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:00 GMT
	I0718 21:10:00.353267    5402 round_trippers.go:580]     Audit-Id: fc54a3f1-52b1-48ef-bddf-98c3520948a3
	I0718 21:10:00.353272    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:00.353280    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:00.353368    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000-m02","uid":"7e73463a-ae2d-4a9c-a2b8-e12809583e97","resourceVersion":"1019","creationTimestamp":"2024-07-19T04:07:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_18T21_07_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:07:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3811 chars]
	I0718 21:10:00.353604    5402 pod_ready.go:92] pod "kube-proxy-nxf5m" in "kube-system" namespace has status "Ready":"True"
	I0718 21:10:00.353615    5402 pod_ready.go:81] duration metric: took 401.085363ms for pod "kube-proxy-nxf5m" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:00.353623    5402 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-multinode-127000" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:00.551225    5402 request.go:629] Waited for 197.401201ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-127000
	I0718 21:10:00.551272    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-127000
	I0718 21:10:00.551281    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:00.551291    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:00.551297    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:00.553642    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:00.553654    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:00.553661    5402 round_trippers.go:580]     Audit-Id: 6a4c5928-dec0-4f17-8503-5804b897e380
	I0718 21:10:00.553666    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:00.553669    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:00.553671    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:00.553674    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:00.553678    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:00 GMT
	I0718 21:10:00.554054    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-127000","namespace":"kube-system","uid":"3060259c-364e-4c24-ae43-107cc1973705","resourceVersion":"1156","creationTimestamp":"2024-07-19T04:02:50Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"746f7833447444339ca9b76cec94dc1f","kubernetes.io/config.mirror":"746f7833447444339ca9b76cec94dc1f","kubernetes.io/config.seen":"2024-07-19T04:02:50.143262549Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:02:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5439 chars]
	I0718 21:10:00.750848    5402 request.go:629] Waited for 196.463338ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:00.750964    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:00.750973    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:00.750983    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:00.750990    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:00.754069    5402 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 21:10:00.754095    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:00.754105    5402 round_trippers.go:580]     Audit-Id: fc94a526-9ae7-4e9d-8768-93ea66030c7f
	I0718 21:10:00.754114    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:00.754122    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:00.754127    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:00.754135    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:00.754142    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:00 GMT
	I0718 21:10:00.754476    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:00.754742    5402 pod_ready.go:97] node "multinode-127000" hosting pod "kube-scheduler-multinode-127000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-127000" has status "Ready":"False"
	I0718 21:10:00.754756    5402 pod_ready.go:81] duration metric: took 401.099374ms for pod "kube-scheduler-multinode-127000" in "kube-system" namespace to be "Ready" ...
	E0718 21:10:00.754764    5402 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-127000" hosting pod "kube-scheduler-multinode-127000" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-127000" has status "Ready":"False"
	I0718 21:10:00.754771    5402 pod_ready.go:38] duration metric: took 1.736195789s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0718 21:10:00.754789    5402 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0718 21:10:00.764551    5402 command_runner.go:130] > -16
	I0718 21:10:00.764759    5402 ops.go:34] apiserver oom_adj: -16
	I0718 21:10:00.764767    5402 kubeadm.go:597] duration metric: took 10.709068229s to restartPrimaryControlPlane
	I0718 21:10:00.764772    5402 kubeadm.go:394] duration metric: took 10.731426361s to StartCluster
	I0718 21:10:00.764780    5402 settings.go:142] acquiring lock: {Name:mk3b26f3c8475777a106e604fcaf3d840de0df1c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:10:00.764869    5402 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19302-1411/kubeconfig
	I0718 21:10:00.765287    5402 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1411/kubeconfig: {Name:mk98b5ca4921c9b1e25bd07d5b44b266493ad1f7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:10:00.765647    5402 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.17 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:10:00.765667    5402 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0718 21:10:00.765773    5402 config.go:182] Loaded profile config "multinode-127000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:10:00.787226    5402 out.go:177] * Verifying Kubernetes components...
	I0718 21:10:00.828712    5402 out.go:177] * Enabled addons: 
	I0718 21:10:00.850006    5402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:10:00.870886    5402 addons.go:510] duration metric: took 105.22099ms for enable addons: enabled=[]
	I0718 21:10:00.988295    5402 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0718 21:10:00.998793    5402 node_ready.go:35] waiting up to 6m0s for node "multinode-127000" to be "Ready" ...
	I0718 21:10:00.998857    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:00.998863    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:00.998869    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:00.998872    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:01.000235    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:01.000247    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:01.000252    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:01.000256    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:01.000260    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:01.000262    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:01 GMT
	I0718 21:10:01.000265    5402 round_trippers.go:580]     Audit-Id: c91c7da9-6269-4b34-b4be-4cd8871e35cc
	I0718 21:10:01.000268    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:01.000382    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:01.499220    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:01.499247    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:01.499258    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:01.499266    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:01.501785    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:01.501799    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:01.501806    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:01.501811    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:01.501825    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:01 GMT
	I0718 21:10:01.501829    5402 round_trippers.go:580]     Audit-Id: dbaf39f6-96c6-4114-8599-940dc95bcc23
	I0718 21:10:01.501833    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:01.501838    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:01.502028    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:01.999922    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:01.999957    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:02.000051    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:02.000059    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:02.002321    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:02.002336    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:02.002344    5402 round_trippers.go:580]     Audit-Id: 0c925959-8590-4395-a787-940865682434
	I0718 21:10:02.002348    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:02.002353    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:02.002358    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:02.002371    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:02.002377    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:02 GMT
	I0718 21:10:02.002444    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:02.500082    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:02.500111    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:02.500123    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:02.500129    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:02.502519    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:02.502535    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:02.502543    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:02.502546    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:02.502564    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:02.502567    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:02 GMT
	I0718 21:10:02.502571    5402 round_trippers.go:580]     Audit-Id: e5ec95b5-0443-4403-b4e9-454bb3d63920
	I0718 21:10:02.502581    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:02.502937    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:02.999587    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:02.999612    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:02.999624    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:02.999632    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:03.002020    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:03.002034    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:03.002040    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:03.002045    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:03.002049    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:03.002053    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:03 GMT
	I0718 21:10:03.002058    5402 round_trippers.go:580]     Audit-Id: fc74dd5c-877c-4af5-a8ff-2ea1c12bd1dd
	I0718 21:10:03.002062    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:03.002217    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:03.002460    5402 node_ready.go:53] node "multinode-127000" has status "Ready":"False"
	I0718 21:10:03.499589    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:03.499613    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:03.499630    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:03.499637    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:03.501937    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:03.501951    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:03.501959    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:03 GMT
	I0718 21:10:03.501964    5402 round_trippers.go:580]     Audit-Id: a6acbb46-f744-4837-a8be-12aaa08ea891
	I0718 21:10:03.501987    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:03.501996    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:03.502004    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:03.502011    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:03.502084    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:04.000364    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:04.000393    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:04.000405    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:04.000412    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:04.003373    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:04.003388    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:04.003395    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:04.003400    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:04.003404    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:04.003408    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:04.003412    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:04 GMT
	I0718 21:10:04.003415    5402 round_trippers.go:580]     Audit-Id: 8217548d-ba6c-4da5-9252-3c3ea223b4bb
	I0718 21:10:04.003509    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:04.500081    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:04.500103    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:04.500116    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:04.500122    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:04.502844    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:04.502858    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:04.502865    5402 round_trippers.go:580]     Audit-Id: b94a00e3-13f1-4bf8-bdf8-8530afcd0d6a
	I0718 21:10:04.502870    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:04.502874    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:04.502877    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:04.502882    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:04.502885    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:04 GMT
	I0718 21:10:04.503138    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:05.000254    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:05.000282    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:05.000293    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:05.000299    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:05.003114    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:05.003129    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:05.003177    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:05.003190    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:05.003194    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:05 GMT
	I0718 21:10:05.003199    5402 round_trippers.go:580]     Audit-Id: 0402c907-e4c7-490b-8a35-c2acc9370318
	I0718 21:10:05.003203    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:05.003209    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:05.003429    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:05.003716    5402 node_ready.go:53] node "multinode-127000" has status "Ready":"False"
	I0718 21:10:05.500345    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:05.500371    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:05.500383    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:05.500388    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:05.503109    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:05.503124    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:05.503131    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:05.503135    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:05 GMT
	I0718 21:10:05.503139    5402 round_trippers.go:580]     Audit-Id: 92cffdf8-66e6-473e-bc0a-e98418c93b41
	I0718 21:10:05.503142    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:05.503147    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:05.503151    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:05.503427    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:05.999203    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:05.999227    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:05.999239    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:05.999245    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:06.001628    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:06.001666    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:06.001700    5402 round_trippers.go:580]     Audit-Id: dda60b80-bc38-4d54-b754-ade4914707bc
	I0718 21:10:06.001706    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:06.001711    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:06.001716    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:06.001721    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:06.001725    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:06 GMT
	I0718 21:10:06.001988    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:06.500146    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:06.500168    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:06.500180    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:06.500188    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:06.502722    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:06.502735    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:06.502742    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:06.502747    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:06.502750    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:06.502754    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:06.502757    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:06 GMT
	I0718 21:10:06.502763    5402 round_trippers.go:580]     Audit-Id: e72476a2-4955-4590-98b4-195bdf982f06
	I0718 21:10:06.503145    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:07.001293    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:07.001313    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:07.001326    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:07.001334    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:07.003572    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:07.003589    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:07.003597    5402 round_trippers.go:580]     Audit-Id: dd3bccd4-a1ec-47c4-8ae1-4e0dec030e59
	I0718 21:10:07.003603    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:07.003607    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:07.003612    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:07.003617    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:07.003621    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:07 GMT
	I0718 21:10:07.003679    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:07.003918    5402 node_ready.go:53] node "multinode-127000" has status "Ready":"False"
	I0718 21:10:07.501209    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:07.501231    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:07.501244    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:07.501252    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:07.503791    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:07.503805    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:07.503812    5402 round_trippers.go:580]     Audit-Id: c8c996ba-ef58-4679-bfaf-63100839349e
	I0718 21:10:07.503816    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:07.503820    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:07.503854    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:07.503864    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:07.503869    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:07 GMT
	I0718 21:10:07.503981    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:08.000606    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:08.000628    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:08.000640    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:08.000646    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:08.003326    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:08.003341    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:08.003347    5402 round_trippers.go:580]     Audit-Id: aa0fc8ef-4ca2-4c3a-a6e5-de1f6a5c7b98
	I0718 21:10:08.003352    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:08.003355    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:08.003358    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:08.003362    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:08.003365    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:08 GMT
	I0718 21:10:08.003540    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:08.499281    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:08.499304    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:08.499316    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:08.499322    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:08.501546    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:08.501562    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:08.501570    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:08.501574    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:08.501577    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:08.501580    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:08 GMT
	I0718 21:10:08.501583    5402 round_trippers.go:580]     Audit-Id: 6b65fa20-5a8e-4e8e-ab77-1a8d8b2ae467
	I0718 21:10:08.501587    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:08.501703    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:08.999285    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:08.999298    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:08.999304    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:08.999307    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:09.001033    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:09.001043    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:09.001048    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:09.001052    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:09.001054    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:09.001058    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:09.001063    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:09 GMT
	I0718 21:10:09.001068    5402 round_trippers.go:580]     Audit-Id: a7804ac7-bf5f-47ea-a40e-7021c1ed87d4
	I0718 21:10:09.001279    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:09.500127    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:09.500142    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:09.500151    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:09.500155    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:09.502148    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:09.502157    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:09.502166    5402 round_trippers.go:580]     Audit-Id: 57b0beab-cef3-47b9-888d-752a442affd8
	I0718 21:10:09.502170    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:09.502173    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:09.502177    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:09.502180    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:09.502184    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:09 GMT
	I0718 21:10:09.502308    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1148","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5301 chars]
	I0718 21:10:09.502496    5402 node_ready.go:53] node "multinode-127000" has status "Ready":"False"
	I0718 21:10:10.000748    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:10.000768    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:10.000779    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:10.000786    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:10.003344    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:10.003357    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:10.003363    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:10.003369    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:10.003372    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:10.003376    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:10.003381    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:10 GMT
	I0718 21:10:10.003384    5402 round_trippers.go:580]     Audit-Id: f9f0f7f5-2f1d-43b8-8571-28c20ebd1ea0
	I0718 21:10:10.003541    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:10.501024    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:10.501051    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:10.501064    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:10.501069    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:10.503795    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:10.503811    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:10.503819    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:10.503824    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:10 GMT
	I0718 21:10:10.503828    5402 round_trippers.go:580]     Audit-Id: 2c98e917-8857-4a2e-9840-1a2873539ac7
	I0718 21:10:10.503831    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:10.503835    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:10.503838    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:10.503938    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:11.000695    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:11.000774    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:11.000788    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:11.000793    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:11.003076    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:11.003091    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:11.003098    5402 round_trippers.go:580]     Audit-Id: 69adac19-e2f5-4c8a-acc2-6ab697611c4d
	I0718 21:10:11.003103    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:11.003108    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:11.003112    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:11.003115    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:11.003127    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:11 GMT
	I0718 21:10:11.003248    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:11.500904    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:11.500932    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:11.500947    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:11.501036    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:11.503821    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:11.503839    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:11.503849    5402 round_trippers.go:580]     Audit-Id: a49d3659-82f3-4d60-be9d-4f012d907a20
	I0718 21:10:11.503855    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:11.503860    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:11.503867    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:11.503870    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:11.503873    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:11 GMT
	I0718 21:10:11.504285    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:11.504538    5402 node_ready.go:53] node "multinode-127000" has status "Ready":"False"
	I0718 21:10:11.999956    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:11.999971    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:11.999980    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:11.999985    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:12.001798    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:12.001825    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:12.001831    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:12 GMT
	I0718 21:10:12.001834    5402 round_trippers.go:580]     Audit-Id: 72284ad4-ee5d-4a67-aeec-d5af2f957e17
	I0718 21:10:12.001838    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:12.001840    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:12.001843    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:12.001845    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:12.001899    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:12.500700    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:12.500726    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:12.500736    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:12.500745    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:12.503390    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:12.503404    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:12.503411    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:12.503415    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:12 GMT
	I0718 21:10:12.503419    5402 round_trippers.go:580]     Audit-Id: 13359f05-5fad-4444-8962-6d3a0737f0ee
	I0718 21:10:12.503423    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:12.503428    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:12.503431    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:12.503572    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:12.999356    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:12.999412    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:12.999423    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:12.999427    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:13.001079    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:13.001089    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:13.001094    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:13.001096    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:13.001099    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:13.001102    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:13 GMT
	I0718 21:10:13.001104    5402 round_trippers.go:580]     Audit-Id: 61dd367b-59e9-421b-b7e4-fa15a7785756
	I0718 21:10:13.001107    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:13.001789    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:13.499737    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:13.499760    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:13.499772    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:13.499779    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:13.502416    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:13.502431    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:13.502439    5402 round_trippers.go:580]     Audit-Id: c49f39bc-c9f9-431f-9fcc-bf73289ae029
	I0718 21:10:13.502443    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:13.502446    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:13.502449    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:13.502453    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:13.502456    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:13 GMT
	I0718 21:10:13.502821    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:13.999639    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:13.999663    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:13.999676    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:13.999682    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:14.002319    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:14.002333    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:14.002340    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:14.002345    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:14.002351    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:14 GMT
	I0718 21:10:14.002354    5402 round_trippers.go:580]     Audit-Id: 1d8d8348-af75-43cc-bb25-5cf4b1b70702
	I0718 21:10:14.002359    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:14.002363    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:14.002692    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:14.002946    5402 node_ready.go:53] node "multinode-127000" has status "Ready":"False"
	I0718 21:10:14.500654    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:14.500683    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:14.500694    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:14.500703    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:14.503639    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:14.503654    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:14.503662    5402 round_trippers.go:580]     Audit-Id: 3601e1a2-f3bf-4906-88aa-54d9dd644e60
	I0718 21:10:14.503670    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:14.503676    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:14.503683    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:14.503697    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:14.503701    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:14 GMT
	I0718 21:10:14.504028    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:15.000251    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:15.000273    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:15.000285    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:15.000291    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:15.002965    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:15.002980    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:15.002987    5402 round_trippers.go:580]     Audit-Id: dd2b4810-1f51-481b-8c18-e75c4450f794
	I0718 21:10:15.002993    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:15.002997    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:15.003002    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:15.003005    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:15.003008    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:15 GMT
	I0718 21:10:15.003096    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:15.499836    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:15.499864    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:15.499955    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:15.499961    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:15.502641    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:15.502655    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:15.502662    5402 round_trippers.go:580]     Audit-Id: 3b6f25b1-8a37-4e9f-a9cb-c6f06cf759cc
	I0718 21:10:15.502667    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:15.502670    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:15.502674    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:15.502678    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:15.502681    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:15 GMT
	I0718 21:10:15.502813    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:15.999947    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:15.999973    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:15.999984    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:15.999991    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:16.002475    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:16.002490    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:16.002497    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:16.002501    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:16.002505    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:16.002508    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:16 GMT
	I0718 21:10:16.002512    5402 round_trippers.go:580]     Audit-Id: f8b75ac1-50e1-4a6c-b823-55c5dd28830d
	I0718 21:10:16.002516    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:16.002681    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:16.500871    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:16.500894    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:16.500906    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:16.500912    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:16.503579    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:16.503634    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:16.503646    5402 round_trippers.go:580]     Audit-Id: b577b064-022f-4f88-b0c4-42d7ba247cbc
	I0718 21:10:16.503651    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:16.503655    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:16.503659    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:16.503663    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:16.503684    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:16 GMT
	I0718 21:10:16.503787    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:16.504036    5402 node_ready.go:53] node "multinode-127000" has status "Ready":"False"
	I0718 21:10:16.999708    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:16.999729    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:16.999740    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:16.999749    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:17.002662    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:17.002679    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:17.002687    5402 round_trippers.go:580]     Audit-Id: e30e1cb9-a52d-4b44-bc39-bb752f99fcb7
	I0718 21:10:17.002691    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:17.002696    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:17.002699    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:17.002704    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:17.002709    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:17 GMT
	I0718 21:10:17.002996    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:17.500523    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:17.500590    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:17.500599    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:17.500605    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:17.503742    5402 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 21:10:17.503755    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:17.503760    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:17.503763    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:17.503766    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:17 GMT
	I0718 21:10:17.503769    5402 round_trippers.go:580]     Audit-Id: 97c2b796-5b1e-4c19-97a1-db0da2373f6c
	I0718 21:10:17.503774    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:17.503777    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:17.503837    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1263","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5517 chars]
	I0718 21:10:18.000343    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:18.000369    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:18.000463    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:18.000474    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:18.003552    5402 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 21:10:18.003570    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:18.003581    5402 round_trippers.go:580]     Audit-Id: 30b525d9-35c9-4058-b70b-1441a5ee1fdf
	I0718 21:10:18.003589    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:18.003594    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:18.003599    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:18.003604    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:18.003610    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:18 GMT
	I0718 21:10:18.003836    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1281","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0718 21:10:18.004077    5402 node_ready.go:49] node "multinode-127000" has status "Ready":"True"
	I0718 21:10:18.004094    5402 node_ready.go:38] duration metric: took 17.004775847s for node "multinode-127000" to be "Ready" ...
	I0718 21:10:18.004102    5402 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0718 21:10:18.004147    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods
	I0718 21:10:18.004153    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:18.004160    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:18.004165    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:18.010372    5402 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0718 21:10:18.010388    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:18.010397    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:18 GMT
	I0718 21:10:18.010404    5402 round_trippers.go:580]     Audit-Id: baa2d715-54c9-4ae9-a895-32b741f94048
	I0718 21:10:18.010409    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:18.010413    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:18.010417    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:18.010425    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:18.011572    5402 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1282"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86051 chars]
	I0718 21:10:18.013391    5402 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-76x8d" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:18.013442    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:18.013448    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:18.013455    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:18.013459    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:18.016264    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:18.016273    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:18.016278    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:18.016281    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:18.016284    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:18.016287    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:18 GMT
	I0718 21:10:18.016290    5402 round_trippers.go:580]     Audit-Id: 4cfd4886-bcd3-426c-8ad8-db7910c4ddae
	I0718 21:10:18.016293    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:18.017214    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:18.017480    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:18.017487    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:18.017493    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:18.017496    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:18.021298    5402 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 21:10:18.021310    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:18.021314    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:18 GMT
	I0718 21:10:18.021351    5402 round_trippers.go:580]     Audit-Id: 1762a68c-3222-4831-a8bf-4e4ecf0046ec
	I0718 21:10:18.021357    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:18.021360    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:18.021362    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:18.021374    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:18.021449    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1281","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0718 21:10:18.514278    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:18.514298    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:18.514306    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:18.514310    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:18.516847    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:18.516857    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:18.516862    5402 round_trippers.go:580]     Audit-Id: 8c30ccc7-41ed-49e3-b39d-44bfee4628e0
	I0718 21:10:18.516866    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:18.516870    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:18.516873    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:18.516875    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:18.516878    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:18 GMT
	I0718 21:10:18.517189    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:18.517467    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:18.517475    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:18.517480    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:18.517484    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:18.518950    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:18.518958    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:18.518965    5402 round_trippers.go:580]     Audit-Id: 316fd4bd-dbcd-4706-972a-ef0c38aa8baf
	I0718 21:10:18.518969    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:18.518974    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:18.518978    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:18.518984    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:18.518988    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:18 GMT
	I0718 21:10:18.519055    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1281","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0718 21:10:19.013907    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:19.013930    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:19.013939    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:19.013945    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:19.017112    5402 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 21:10:19.017127    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:19.017137    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:19.017156    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:19 GMT
	I0718 21:10:19.017172    5402 round_trippers.go:580]     Audit-Id: 734cc069-c84e-4e5f-bad8-0a32cd34a4c1
	I0718 21:10:19.017180    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:19.017189    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:19.017194    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:19.017383    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:19.017738    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:19.017748    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:19.017755    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:19.017761    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:19.019077    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:19.019086    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:19.019091    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:19.019094    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:19 GMT
	I0718 21:10:19.019106    5402 round_trippers.go:580]     Audit-Id: 78664a5a-93c4-44c6-aab5-2b1073f8c551
	I0718 21:10:19.019113    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:19.019115    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:19.019119    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:19.019181    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1281","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0718 21:10:19.514205    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:19.514228    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:19.514240    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:19.514248    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:19.516881    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:19.516911    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:19.516935    5402 round_trippers.go:580]     Audit-Id: f8192e8b-6d2b-4f3f-ad77-ea71d7a77521
	I0718 21:10:19.516949    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:19.516961    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:19.516985    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:19.516992    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:19.516996    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:19 GMT
	I0718 21:10:19.517176    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:19.517528    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:19.517537    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:19.517545    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:19.517552    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:19.518898    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:19.518905    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:19.518910    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:19.518913    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:19.518916    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:19 GMT
	I0718 21:10:19.518918    5402 round_trippers.go:580]     Audit-Id: 4ff45ed1-6204-40b6-83bc-43b823c845b8
	I0718 21:10:19.518921    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:19.518923    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:19.519104    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1281","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5294 chars]
	I0718 21:10:20.014931    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:20.014953    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:20.014966    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:20.014972    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:20.017821    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:20.017838    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:20.017846    5402 round_trippers.go:580]     Audit-Id: c29d33d6-1ca7-4768-976c-5ba0ee1d7485
	I0718 21:10:20.017850    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:20.017856    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:20.017860    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:20.017865    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:20.017869    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:20 GMT
	I0718 21:10:20.018054    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:20.018419    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:20.018429    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:20.018438    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:20.018443    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:20.020154    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:20.020163    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:20.020169    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:20.020174    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:20.020179    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:20 GMT
	I0718 21:10:20.020187    5402 round_trippers.go:580]     Audit-Id: 62373308-cc01-4a6f-81ff-e1feb672a3da
	I0718 21:10:20.020194    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:20.020199    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:20.020302    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:20.020475    5402 pod_ready.go:102] pod "coredns-7db6d8ff4d-76x8d" in "kube-system" namespace has status "Ready":"False"
	I0718 21:10:20.514938    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:20.514960    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:20.514972    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:20.514979    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:20.517509    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:20.517521    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:20.517528    5402 round_trippers.go:580]     Audit-Id: 5d3bc4d3-6dca-476b-9eff-b4b50e69351a
	I0718 21:10:20.517534    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:20.517541    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:20.517546    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:20.517551    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:20.517557    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:20 GMT
	I0718 21:10:20.517759    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:20.518121    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:20.518131    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:20.518139    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:20.518144    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:20.519532    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:20.519541    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:20.519545    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:20.519549    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:20.519552    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:20.519555    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:20.519565    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:20 GMT
	I0718 21:10:20.519569    5402 round_trippers.go:580]     Audit-Id: 8f634210-671d-412e-a970-d17a05bdcf46
	I0718 21:10:20.519735    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:21.015019    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:21.015041    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:21.015053    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:21.015059    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:21.017754    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:21.017768    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:21.017775    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:21.017779    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:21 GMT
	I0718 21:10:21.017806    5402 round_trippers.go:580]     Audit-Id: 273eaa1c-3bf9-42f7-bba9-89aab9243831
	I0718 21:10:21.017812    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:21.017817    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:21.017822    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:21.017914    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:21.018279    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:21.018288    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:21.018296    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:21.018299    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:21.019691    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:21.019699    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:21.019705    5402 round_trippers.go:580]     Audit-Id: fd16eefa-ec1d-4456-ac2b-a2d4acaf82e1
	I0718 21:10:21.019709    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:21.019714    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:21.019718    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:21.019721    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:21.019723    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:21 GMT
	I0718 21:10:21.019834    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:21.515168    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:21.515190    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:21.515201    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:21.515207    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:21.517972    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:21.517983    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:21.517989    5402 round_trippers.go:580]     Audit-Id: b7909345-ed7d-42f7-86aa-9698a1863426
	I0718 21:10:21.517995    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:21.518000    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:21.518004    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:21.518008    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:21.518013    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:21 GMT
	I0718 21:10:21.518580    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:21.518931    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:21.518941    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:21.518948    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:21.518954    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:21.525254    5402 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0718 21:10:21.525267    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:21.525273    5402 round_trippers.go:580]     Audit-Id: 3ed1f7ae-bf47-46e8-aae6-f1a580abdd5b
	I0718 21:10:21.525276    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:21.525279    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:21.525281    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:21.525283    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:21.525287    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:21 GMT
	I0718 21:10:21.525406    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:22.014693    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:22.014716    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:22.014727    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:22.014733    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:22.017558    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:22.017579    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:22.017590    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:22.017598    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:22.017604    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:22.017609    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:22.017618    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:22 GMT
	I0718 21:10:22.017623    5402 round_trippers.go:580]     Audit-Id: ab23d105-dc2a-4588-94ab-73e13b2ed1c8
	I0718 21:10:22.017821    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:22.018200    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:22.018211    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:22.018219    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:22.018223    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:22.019826    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:22.019834    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:22.019840    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:22.019843    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:22.019846    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:22.019849    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:22.019852    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:22 GMT
	I0718 21:10:22.019855    5402 round_trippers.go:580]     Audit-Id: 18410682-4d22-4ffb-942b-6a7146e12419
	I0718 21:10:22.020155    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:22.514466    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:22.514489    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:22.514501    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:22.514506    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:22.516641    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:22.516655    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:22.516662    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:22.516667    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:22.516697    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:22.516707    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:22.516710    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:22 GMT
	I0718 21:10:22.516714    5402 round_trippers.go:580]     Audit-Id: 90762c29-c0bb-490a-a02f-e2633b59233c
	I0718 21:10:22.516834    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:22.517197    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:22.517207    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:22.517213    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:22.517218    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:22.518636    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:22.518646    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:22.518654    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:22.518680    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:22 GMT
	I0718 21:10:22.518687    5402 round_trippers.go:580]     Audit-Id: 4e2c0d39-d197-4420-a139-ea5f26a16943
	I0718 21:10:22.518691    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:22.518695    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:22.518699    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:22.518926    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:22.519142    5402 pod_ready.go:102] pod "coredns-7db6d8ff4d-76x8d" in "kube-system" namespace has status "Ready":"False"
	I0718 21:10:23.013893    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:23.013914    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:23.013972    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:23.013990    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:23.016656    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:23.016671    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:23.016678    5402 round_trippers.go:580]     Audit-Id: 3bfd7f63-983b-4f98-89a9-63c6391a93f8
	I0718 21:10:23.016682    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:23.016686    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:23.016690    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:23.016696    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:23.016700    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:23 GMT
	I0718 21:10:23.016836    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:23.017197    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:23.017207    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:23.017214    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:23.017219    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:23.018749    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:23.018762    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:23.018768    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:23 GMT
	I0718 21:10:23.018773    5402 round_trippers.go:580]     Audit-Id: e798554c-47e0-48c2-b8a8-97d6cc687aa2
	I0718 21:10:23.018776    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:23.018779    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:23.018784    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:23.018788    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:23.018985    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:23.514483    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:23.514505    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:23.514516    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:23.514546    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:23.517145    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:23.517161    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:23.517168    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:23.517181    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:23.517186    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:23.517189    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:23 GMT
	I0718 21:10:23.517192    5402 round_trippers.go:580]     Audit-Id: 9f2c692d-a134-40eb-a88e-4275bef4ebeb
	I0718 21:10:23.517196    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:23.517303    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:23.517659    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:23.517669    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:23.517677    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:23.517682    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:23.519160    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:23.519169    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:23.519174    5402 round_trippers.go:580]     Audit-Id: 78039b5e-627a-4230-a2ad-d4d98ec03005
	I0718 21:10:23.519183    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:23.519188    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:23.519193    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:23.519197    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:23.519199    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:23 GMT
	I0718 21:10:23.519264    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:24.015894    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:24.015918    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:24.015928    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:24.015934    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:24.018450    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:24.018462    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:24.018470    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:24 GMT
	I0718 21:10:24.018476    5402 round_trippers.go:580]     Audit-Id: ec2616a2-32ba-400e-aa72-1b156ae1be8e
	I0718 21:10:24.018482    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:24.018486    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:24.018492    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:24.018499    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:24.018765    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:24.019125    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:24.019135    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:24.019142    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:24.019147    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:24.020557    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:24.020565    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:24.020570    5402 round_trippers.go:580]     Audit-Id: a0ce9a8e-f05f-493d-bb48-da37b2561ae6
	I0718 21:10:24.020573    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:24.020576    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:24.020578    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:24.020581    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:24.020583    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:24 GMT
	I0718 21:10:24.020684    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:24.514857    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:24.514879    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:24.514891    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:24.514898    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:24.517625    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:24.517645    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:24.517656    5402 round_trippers.go:580]     Audit-Id: b0470f56-149f-4a0f-935d-2c898dad6508
	I0718 21:10:24.517663    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:24.517669    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:24.517672    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:24.517676    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:24.517680    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:24 GMT
	I0718 21:10:24.517871    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:24.518232    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:24.518243    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:24.518251    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:24.518257    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:24.519628    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:24.519637    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:24.519642    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:24.519647    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:24.519650    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:24.519653    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:24 GMT
	I0718 21:10:24.519657    5402 round_trippers.go:580]     Audit-Id: a1bcaf63-ab4b-44ca-9e95-80891120857c
	I0718 21:10:24.519660    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:24.519729    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:24.519902    5402 pod_ready.go:102] pod "coredns-7db6d8ff4d-76x8d" in "kube-system" namespace has status "Ready":"False"
	I0718 21:10:25.015089    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:25.015109    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:25.015120    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:25.015127    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:25.017649    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:25.017660    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:25.017665    5402 round_trippers.go:580]     Audit-Id: 9e88d63b-6297-4eba-88ff-bb00fa210573
	I0718 21:10:25.017669    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:25.017671    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:25.017674    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:25.017676    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:25.017679    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:25 GMT
	I0718 21:10:25.017809    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:25.018174    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:25.018194    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:25.018221    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:25.018228    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:25.020050    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:25.020058    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:25.020063    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:25.020067    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:25 GMT
	I0718 21:10:25.020070    5402 round_trippers.go:580]     Audit-Id: 158c095f-56b9-4462-a047-f38e05ce561a
	I0718 21:10:25.020080    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:25.020083    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:25.020086    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:25.020218    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:25.514354    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:25.514395    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:25.514406    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:25.514413    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:25.516975    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:25.516992    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:25.517002    5402 round_trippers.go:580]     Audit-Id: 479b3a4c-eca7-446f-aae2-037ca1e7e119
	I0718 21:10:25.517009    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:25.517016    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:25.517022    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:25.517028    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:25.517033    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:25 GMT
	I0718 21:10:25.517351    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:25.517720    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:25.517730    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:25.517738    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:25.517743    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:25.519107    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:25.519116    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:25.519123    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:25.519128    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:25.519133    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:25 GMT
	I0718 21:10:25.519136    5402 round_trippers.go:580]     Audit-Id: 35f2cfc3-17ac-4bcb-80b1-a620f655bc0b
	I0718 21:10:25.519139    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:25.519142    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:25.519206    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:26.014046    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:26.014069    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:26.014079    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:26.014085    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:26.017322    5402 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 21:10:26.017339    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:26.017350    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:26.017358    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:26.017364    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:26.017370    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:26.017376    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:26 GMT
	I0718 21:10:26.017381    5402 round_trippers.go:580]     Audit-Id: 351c6f96-ef6e-49af-a5b5-6a61b6346526
	I0718 21:10:26.017599    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:26.017977    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:26.017986    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:26.017993    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:26.017998    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:26.019574    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:26.019582    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:26.019590    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:26.019593    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:26.019597    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:26 GMT
	I0718 21:10:26.019600    5402 round_trippers.go:580]     Audit-Id: 0f74dfd0-172c-4d56-83a7-d23eed64839f
	I0718 21:10:26.019604    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:26.019608    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:26.019813    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:26.515902    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:26.515946    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:26.515958    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:26.515964    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:26.519178    5402 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 21:10:26.519195    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:26.519202    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:26.519216    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:26.519220    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:26.519224    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:26 GMT
	I0718 21:10:26.519229    5402 round_trippers.go:580]     Audit-Id: 6fea295e-ae8c-4f1a-ab25-d6d0ad3d4426
	I0718 21:10:26.519232    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:26.519623    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:26.520004    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:26.520018    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:26.520025    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:26.520031    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:26.521420    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:26.521427    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:26.521434    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:26.521439    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:26.521443    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:26.521447    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:26.521451    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:26 GMT
	I0718 21:10:26.521457    5402 round_trippers.go:580]     Audit-Id: 0525ca34-f642-4751-9343-3fda7a40f62e
	I0718 21:10:26.521597    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:26.521769    5402 pod_ready.go:102] pod "coredns-7db6d8ff4d-76x8d" in "kube-system" namespace has status "Ready":"False"
	I0718 21:10:27.014394    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:27.014436    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:27.014449    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:27.014456    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:27.017270    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:27.017331    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:27.017345    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:27 GMT
	I0718 21:10:27.017354    5402 round_trippers.go:580]     Audit-Id: a5259704-9e04-4af6-b60c-fc532efa6823
	I0718 21:10:27.017359    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:27.017363    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:27.017368    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:27.017372    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:27.017457    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:27.017820    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:27.017829    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:27.017836    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:27.017842    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:27.019171    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:27.019180    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:27.019185    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:27.019188    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:27.019191    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:27.019195    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:27 GMT
	I0718 21:10:27.019205    5402 round_trippers.go:580]     Audit-Id: 45ca80df-c705-4a43-91c1-8d0c73916b70
	I0718 21:10:27.019208    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:27.019268    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:27.513888    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:27.513899    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:27.513904    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:27.513907    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:27.515580    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:27.515592    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:27.515599    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:27.515616    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:27.515637    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:27.515644    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:27.515648    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:27 GMT
	I0718 21:10:27.515651    5402 round_trippers.go:580]     Audit-Id: 9388a1e8-495e-4588-a9f6-f8600e34fcf5
	I0718 21:10:27.515776    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:27.516053    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:27.516060    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:27.516066    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:27.516070    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:27.517004    5402 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0718 21:10:27.517013    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:27.517018    5402 round_trippers.go:580]     Audit-Id: c28374f6-eacd-4ff0-be53-ea03af6a450c
	I0718 21:10:27.517021    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:27.517025    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:27.517028    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:27.517043    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:27.517061    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:27 GMT
	I0718 21:10:27.517194    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:28.013901    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:28.013919    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:28.013931    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:28.013939    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:28.016324    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:28.016337    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:28.016345    5402 round_trippers.go:580]     Audit-Id: 281c6eef-9bbf-4af8-bb27-090bb956d97f
	I0718 21:10:28.016349    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:28.016353    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:28.016356    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:28.016360    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:28.016364    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:28 GMT
	I0718 21:10:28.016666    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:28.017049    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:28.017058    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:28.017066    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:28.017071    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:28.018605    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:28.018613    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:28.018622    5402 round_trippers.go:580]     Audit-Id: f83f51e6-3074-45dc-98e6-7a303f6ca8a2
	I0718 21:10:28.018627    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:28.018633    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:28.018636    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:28.018640    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:28.018644    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:28 GMT
	I0718 21:10:28.018756    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:28.513919    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:28.513932    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:28.513939    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:28.513942    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:28.515647    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:28.515655    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:28.515660    5402 round_trippers.go:580]     Audit-Id: aca584cb-87d1-4433-b687-bad0692fbd83
	I0718 21:10:28.515663    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:28.515666    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:28.515670    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:28.515676    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:28.515682    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:28 GMT
	I0718 21:10:28.515918    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:28.516207    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:28.516214    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:28.516219    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:28.516223    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:28.520408    5402 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0718 21:10:28.520417    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:28.520423    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:28.520426    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:28.520429    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:28.520432    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:28.520435    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:28 GMT
	I0718 21:10:28.520437    5402 round_trippers.go:580]     Audit-Id: 7a21bfcf-272b-4f2b-9902-ddff80397717
	I0718 21:10:28.520575    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:29.015146    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:29.015172    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:29.015182    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:29.015188    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:29.017924    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:29.017937    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:29.017943    5402 round_trippers.go:580]     Audit-Id: 147a57d4-fb29-46e8-ae73-4c2645307d77
	I0718 21:10:29.017947    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:29.017951    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:29.017954    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:29.017957    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:29.017999    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:29 GMT
	I0718 21:10:29.018400    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:29.018747    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:29.018756    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:29.018764    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:29.018769    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:29.020133    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:29.020141    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:29.020148    5402 round_trippers.go:580]     Audit-Id: 25eb8f52-a76c-421b-8177-08516cc946d0
	I0718 21:10:29.020153    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:29.020158    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:29.020162    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:29.020165    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:29.020168    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:29 GMT
	I0718 21:10:29.020229    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:29.020397    5402 pod_ready.go:102] pod "coredns-7db6d8ff4d-76x8d" in "kube-system" namespace has status "Ready":"False"
	I0718 21:10:29.514185    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:29.514196    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:29.514205    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:29.514208    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:29.515844    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:29.515854    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:29.515860    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:29 GMT
	I0718 21:10:29.515862    5402 round_trippers.go:580]     Audit-Id: 7ce3860a-0ef8-49d0-af33-90506b97a3ba
	I0718 21:10:29.515865    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:29.515867    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:29.515870    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:29.515872    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:29.515966    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:29.516259    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:29.516267    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:29.516273    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:29.516276    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:29.517487    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:29.517496    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:29.517500    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:29.517503    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:29.517507    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:29.517509    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:29.517512    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:29 GMT
	I0718 21:10:29.517515    5402 round_trippers.go:580]     Audit-Id: 12584d93-9071-4ed6-b688-48b24c5ed3b3
	I0718 21:10:29.517756    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:30.013909    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:30.013960    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.013965    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.013968    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.015752    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:30.015762    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.015767    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.015771    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:30 GMT
	I0718 21:10:30.015773    5402 round_trippers.go:580]     Audit-Id: 9be1232e-6674-47e1-95ab-c1bbe338685f
	I0718 21:10:30.015776    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.015778    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.015781    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.015836    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1153","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6837 chars]
	I0718 21:10:30.016118    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:30.016126    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.016131    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.016134    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.017505    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:30.017530    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.017551    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.017557    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.017559    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.017562    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:30 GMT
	I0718 21:10:30.017565    5402 round_trippers.go:580]     Audit-Id: c27764f4-da70-4eef-b36d-83b0996c892b
	I0718 21:10:30.017568    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.017905    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:30.515137    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-76x8d
	I0718 21:10:30.515164    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.515177    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.515186    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.517792    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:30.517811    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.517819    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.517824    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.517828    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.517833    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:30 GMT
	I0718 21:10:30.517837    5402 round_trippers.go:580]     Audit-Id: f9c4f12b-9a6b-48e7-bcd2-63e9393b6422
	I0718 21:10:30.517840    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.517936    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1304","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},"f:preferredDuringSchedulingIgnoredDuringExecution":
{}}},"f:containers":{"k:{\"name\":\"coredns\"}":{".":{},"f:args":{},"f: [truncated 6784 chars]
	I0718 21:10:30.518316    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:30.518326    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.518333    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.518337    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.519904    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:30.519913    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.519918    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.519921    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.519926    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.519928    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.519931    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:30 GMT
	I0718 21:10:30.519934    5402 round_trippers.go:580]     Audit-Id: a0b93586-011f-4d75-a2aa-ab5d59412098
	I0718 21:10:30.520018    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:30.520211    5402 pod_ready.go:92] pod "coredns-7db6d8ff4d-76x8d" in "kube-system" namespace has status "Ready":"True"
	I0718 21:10:30.520220    5402 pod_ready.go:81] duration metric: took 12.506446695s for pod "coredns-7db6d8ff4d-76x8d" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:30.520226    5402 pod_ready.go:78] waiting up to 6m0s for pod "etcd-multinode-127000" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:30.520257    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-127000
	I0718 21:10:30.520262    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.520267    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.520271    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.521418    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:30.521428    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.521433    5402 round_trippers.go:580]     Audit-Id: 4f2db715-bf9a-4abd-80e1-937d58db2e88
	I0718 21:10:30.521436    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.521450    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.521455    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.521457    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.521484    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:30 GMT
	I0718 21:10:30.521611    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"etcd-multinode-127000","namespace":"kube-system","uid":"4d4a84eb-c0c3-44f3-a515-e99b9ba8fe88","resourceVersion":"1241","creationTimestamp":"2024-07-19T04:02:50Z","labels":{"component":"etcd","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/etcd.advertise-client-urls":"https://192.169.0.17:2379","kubernetes.io/config.hash":"dd34cf21994d39cf28d26460f62a29d2","kubernetes.io/config.mirror":"dd34cf21994d39cf28d26460f62a29d2","kubernetes.io/config.seen":"2024-07-19T04:02:50.143265078Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:02:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kubernetes.io/etcd.advertise-c
lient-urls":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config [truncated 6358 chars]
	I0718 21:10:30.521822    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:30.521828    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.521834    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.521837    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.523142    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:30.523149    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.523153    5402 round_trippers.go:580]     Audit-Id: 74420363-7ff4-4026-9014-8270e3825bb6
	I0718 21:10:30.523156    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.523159    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.523162    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.523164    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.523166    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:30 GMT
	I0718 21:10:30.523484    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:30.523650    5402 pod_ready.go:92] pod "etcd-multinode-127000" in "kube-system" namespace has status "Ready":"True"
	I0718 21:10:30.523658    5402 pod_ready.go:81] duration metric: took 3.425685ms for pod "etcd-multinode-127000" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:30.523668    5402 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-multinode-127000" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:30.523712    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-127000
	I0718 21:10:30.523717    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.523723    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.523727    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.524843    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:30.524852    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.524857    5402 round_trippers.go:580]     Audit-Id: d160e265-02ee-46fc-8d11-17a48e178963
	I0718 21:10:30.524860    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.524863    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.524866    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.524869    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.524872    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:30 GMT
	I0718 21:10:30.525090    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-apiserver-multinode-127000","namespace":"kube-system","uid":"15bce3aa-75a4-4cca-beec-20a4eeed2c14","resourceVersion":"1272","creationTimestamp":"2024-07-19T04:02:50Z","labels":{"component":"kube-apiserver","tier":"control-plane"},"annotations":{"kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint":"192.169.0.17:8443","kubernetes.io/config.hash":"adeddd763cb12ff26454c97d2cb34645","kubernetes.io/config.mirror":"adeddd763cb12ff26454c97d2cb34645","kubernetes.io/config.seen":"2024-07-19T04:02:50.143265837Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:02:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubeadm.kub
ernetes.io/kube-apiserver.advertise-address.endpoint":{},"f:kubernetes. [truncated 7892 chars]
	I0718 21:10:30.525320    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:30.525327    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.525332    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.525336    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.528297    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:30.528303    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.528308    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.528311    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.528313    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.528316    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:30 GMT
	I0718 21:10:30.528319    5402 round_trippers.go:580]     Audit-Id: 4f60f004-b1db-4418-8637-dff6740cc8cb
	I0718 21:10:30.528322    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.528566    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:30.528728    5402 pod_ready.go:92] pod "kube-apiserver-multinode-127000" in "kube-system" namespace has status "Ready":"True"
	I0718 21:10:30.528735    5402 pod_ready.go:81] duration metric: took 5.06175ms for pod "kube-apiserver-multinode-127000" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:30.528743    5402 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-multinode-127000" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:30.528773    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-127000
	I0718 21:10:30.528777    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.528784    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.528787    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.531302    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:30.531310    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.531315    5402 round_trippers.go:580]     Audit-Id: c19db514-5ec1-4c55-afda-e6884af87d1c
	I0718 21:10:30.531318    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.531322    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.531326    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.531328    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.531330    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:30 GMT
	I0718 21:10:30.531621    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-controller-manager-multinode-127000","namespace":"kube-system","uid":"38250320-d12a-418f-867a-05a82f4f876c","resourceVersion":"1251","creationTimestamp":"2024-07-19T04:02:50Z","labels":{"component":"kube-controller-manager","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"14d5cf7b26b6a66b49878f0b6b5873c6","kubernetes.io/config.mirror":"14d5cf7b26b6a66b49878f0b6b5873c6","kubernetes.io/config.seen":"2024-07-19T04:02:50.143266437Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:02:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.
io/config.seen":{},"f:kubernetes.io/config.source":{}},"f:labels":{".": [truncated 7467 chars]
	I0718 21:10:30.531867    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:30.531873    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.531878    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.531882    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.533311    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:30.533318    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.533323    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:30 GMT
	I0718 21:10:30.533325    5402 round_trippers.go:580]     Audit-Id: a4ca03a9-9f48-462f-8995-b8453aa7ca09
	I0718 21:10:30.533328    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.533331    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.533333    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.533337    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.533455    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:30.533626    5402 pod_ready.go:92] pod "kube-controller-manager-multinode-127000" in "kube-system" namespace has status "Ready":"True"
	I0718 21:10:30.533633    5402 pod_ready.go:81] duration metric: took 4.885281ms for pod "kube-controller-manager-multinode-127000" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:30.533646    5402 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8j597" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:30.533672    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8j597
	I0718 21:10:30.533677    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.533682    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.533687    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.535055    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:30.535062    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.535067    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.535070    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.535073    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.535090    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:30 GMT
	I0718 21:10:30.535095    5402 round_trippers.go:580]     Audit-Id: e0f8c5a0-3a97-4ccc-ab65-569fd2c0a88e
	I0718 21:10:30.535101    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.535416    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8j597","generateName":"kube-proxy-","namespace":"kube-system","uid":"51e85da8-2b18-4373-8f84-65ed52d6bc13","resourceVersion":"1162","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4f07a990-f65d-45d1-9766-77572b6fc4bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4f07a990-f65d-45d1-9766-77572b6fc4bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 6029 chars]
	I0718 21:10:30.535655    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:30.535662    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.535668    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.535670    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.536855    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:30.536862    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.536867    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.536876    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.536879    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.536882    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:30 GMT
	I0718 21:10:30.536885    5402 round_trippers.go:580]     Audit-Id: 7989778c-5268-4207-9bcd-c3238442a1b7
	I0718 21:10:30.536887    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.536998    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:30.537165    5402 pod_ready.go:92] pod "kube-proxy-8j597" in "kube-system" namespace has status "Ready":"True"
	I0718 21:10:30.537172    5402 pod_ready.go:81] duration metric: took 3.521164ms for pod "kube-proxy-8j597" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:30.537183    5402 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8nvff" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:30.716043    5402 request.go:629] Waited for 178.781715ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8nvff
	I0718 21:10:30.716214    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8nvff
	I0718 21:10:30.716225    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.716235    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.716243    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.718855    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:30.718870    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.718881    5402 round_trippers.go:580]     Audit-Id: e89f1c54-25b7-4a8f-8797-cc1601990cde
	I0718 21:10:30.718889    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.718896    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.718901    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.718906    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.718914    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:30 GMT
	I0718 21:10:30.719090    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-8nvff","generateName":"kube-proxy-","namespace":"kube-system","uid":"4b740c91-be18-4bc8-9698-0b4fbda8695e","resourceVersion":"1110","creationTimestamp":"2024-07-19T04:04:35Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4f07a990-f65d-45d1-9766-77572b6fc4bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:04:35Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4f07a990-f65d-45d1-9766-77572b6fc4bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:
requiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k [truncated 5832 chars]
	I0718 21:10:30.916443    5402 request.go:629] Waited for 197.0029ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/nodes/multinode-127000-m03
	I0718 21:10:30.916560    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000-m03
	I0718 21:10:30.916571    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:30.916583    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:30.916593    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:30.919137    5402 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0718 21:10:30.919151    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:30.919158    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:31 GMT
	I0718 21:10:30.919163    5402 round_trippers.go:580]     Audit-Id: 3e32e6ae-fa8a-41f5-b77a-e81a425989dd
	I0718 21:10:30.919188    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:30.919195    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:30.919198    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:30.919201    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:30.919205    5402 round_trippers.go:580]     Content-Length: 210
	I0718 21:10:30.919233    5402 request.go:1212] Response Body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"nodes \"multinode-127000-m03\" not found","reason":"NotFound","details":{"name":"multinode-127000-m03","kind":"nodes"},"code":404}
	I0718 21:10:30.919295    5402 pod_ready.go:97] node "multinode-127000-m03" hosting pod "kube-proxy-8nvff" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-127000-m03": nodes "multinode-127000-m03" not found
	I0718 21:10:30.919308    5402 pod_ready.go:81] duration metric: took 382.107283ms for pod "kube-proxy-8nvff" in "kube-system" namespace to be "Ready" ...
	E0718 21:10:30.919323    5402 pod_ready.go:66] WaitExtra: waitPodCondition: node "multinode-127000-m03" hosting pod "kube-proxy-8nvff" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "multinode-127000-m03": nodes "multinode-127000-m03" not found
	I0718 21:10:30.919331    5402 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-nxf5m" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:31.116436    5402 request.go:629] Waited for 197.053347ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nxf5m
	I0718 21:10:31.116635    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-proxy-nxf5m
	I0718 21:10:31.116647    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:31.116658    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:31.116666    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:31.119218    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:31.119232    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:31.119240    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:31 GMT
	I0718 21:10:31.119243    5402 round_trippers.go:580]     Audit-Id: 3e0578f4-7a04-4fb9-88aa-2566ba6d076d
	I0718 21:10:31.119246    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:31.119251    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:31.119254    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:31.119257    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:31.119490    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-proxy-nxf5m","generateName":"kube-proxy-","namespace":"kube-system","uid":"e48c420f-b1a1-4a9e-bc7e-fa0d640e5764","resourceVersion":"993","creationTimestamp":"2024-07-19T04:03:47Z","labels":{"controller-revision-hash":"5bbc78d4f8","k8s-app":"kube-proxy","pod-template-generation":"1"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"DaemonSet","name":"kube-proxy","uid":"4f07a990-f65d-45d1-9766-77572b6fc4bb","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:47Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:controller-revision-hash":{},"f:k8s-app":{},"f:pod-template-generation":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"4f07a990-f65d-45d1-9766-77572b6fc4bb\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:nodeAffinity":{".":{},"f:r
equiredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers":{"k: [truncated 5831 chars]
	I0718 21:10:31.315922    5402 request.go:629] Waited for 196.097926ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/nodes/multinode-127000-m02
	I0718 21:10:31.316050    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000-m02
	I0718 21:10:31.316060    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:31.316070    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:31.316077    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:31.318688    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:31.318704    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:31.318711    5402 round_trippers.go:580]     Audit-Id: b7d79a4e-0f18-41da-b2a1-191205ade99f
	I0718 21:10:31.318715    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:31.318719    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:31.318722    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:31.318727    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:31.318733    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:31 GMT
	I0718 21:10:31.318814    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000-m02","uid":"7e73463a-ae2d-4a9c-a2b8-e12809583e97","resourceVersion":"1019","creationTimestamp":"2024-07-19T04:07:31Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000-m02","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"false","minikube.k8s.io/updated_at":"2024_07_18T21_07_31_0700","minikube.k8s.io/version":"v1.33.1"},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubeadm","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:07:31Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{
"f:annotations":{"f:kubeadm.alpha.kubernetes.io/cri-socket":{}}}}},{"ma [truncated 3811 chars]
	I0718 21:10:31.319038    5402 pod_ready.go:92] pod "kube-proxy-nxf5m" in "kube-system" namespace has status "Ready":"True"
	I0718 21:10:31.319049    5402 pod_ready.go:81] duration metric: took 399.698182ms for pod "kube-proxy-nxf5m" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:31.319057    5402 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-multinode-127000" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:31.516543    5402 request.go:629] Waited for 197.436038ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-127000
	I0718 21:10:31.516674    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-127000
	I0718 21:10:31.516688    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:31.516697    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:31.516703    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:31.519138    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:31.519151    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:31.519158    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:31.519162    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:31.519166    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:31.519168    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:31.519172    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:31 GMT
	I0718 21:10:31.519177    5402 round_trippers.go:580]     Audit-Id: b965ef6e-9342-4a2e-9db6-5beadfcfd87b
	I0718 21:10:31.519355    5402 request.go:1212] Response Body: {"kind":"Pod","apiVersion":"v1","metadata":{"name":"kube-scheduler-multinode-127000","namespace":"kube-system","uid":"3060259c-364e-4c24-ae43-107cc1973705","resourceVersion":"1268","creationTimestamp":"2024-07-19T04:02:50Z","labels":{"component":"kube-scheduler","tier":"control-plane"},"annotations":{"kubernetes.io/config.hash":"746f7833447444339ca9b76cec94dc1f","kubernetes.io/config.mirror":"746f7833447444339ca9b76cec94dc1f","kubernetes.io/config.seen":"2024-07-19T04:02:50.143262549Z","kubernetes.io/config.source":"file"},"ownerReferences":[{"apiVersion":"v1","kind":"Node","name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","controller":true}],"managedFields":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:02:50Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:kubernetes.io/config.hash":{},"f:kubernetes.io/config.mirror":{},"f:kubernetes.io/config.seen":{}
,"f:kubernetes.io/config.source":{}},"f:labels":{".":{},"f:component":{ [truncated 5195 chars]
	I0718 21:10:31.715822    5402 request.go:629] Waited for 196.078642ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:31.715865    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes/multinode-127000
	I0718 21:10:31.715873    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:31.715882    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:31.715887    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:31.717353    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:31.717361    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:31.717365    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:31.717369    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:31 GMT
	I0718 21:10:31.717371    5402 round_trippers.go:580]     Audit-Id: c2b7c130-13ea-4a55-8cb2-9153e81bf749
	I0718 21:10:31.717374    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:31.717377    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:31.717380    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:31.717586    5402 request.go:1212] Response Body: {"kind":"Node","apiVersion":"v1","metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFields":[{"manager":"kubelet","operation":"Update","ap
iVersion":"v1","time":"2024-07-19T04:02:48Z","fieldsType":"FieldsV1","f [truncated 5174 chars]
	I0718 21:10:31.717794    5402 pod_ready.go:92] pod "kube-scheduler-multinode-127000" in "kube-system" namespace has status "Ready":"True"
	I0718 21:10:31.717803    5402 pod_ready.go:81] duration metric: took 398.728219ms for pod "kube-scheduler-multinode-127000" in "kube-system" namespace to be "Ready" ...
	I0718 21:10:31.717810    5402 pod_ready.go:38] duration metric: took 13.713291891s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0718 21:10:31.717821    5402 api_server.go:52] waiting for apiserver process to appear ...
	I0718 21:10:31.717873    5402 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 21:10:31.729465    5402 command_runner.go:130] > 1628
	I0718 21:10:31.729708    5402 api_server.go:72] duration metric: took 30.963125586s to wait for apiserver process to appear ...
	I0718 21:10:31.729717    5402 api_server.go:88] waiting for apiserver healthz status ...
	I0718 21:10:31.729732    5402 api_server.go:253] Checking apiserver healthz at https://192.169.0.17:8443/healthz ...
	I0718 21:10:31.733204    5402 api_server.go:279] https://192.169.0.17:8443/healthz returned 200:
	ok
	I0718 21:10:31.733233    5402 round_trippers.go:463] GET https://192.169.0.17:8443/version
	I0718 21:10:31.733238    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:31.733243    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:31.733247    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:31.733811    5402 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0718 21:10:31.733819    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:31.733824    5402 round_trippers.go:580]     Audit-Id: ef96c85b-1293-47b4-9817-f2eb7f350539
	I0718 21:10:31.733828    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:31.733831    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:31.733835    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:31.733837    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:31.733841    5402 round_trippers.go:580]     Content-Length: 263
	I0718 21:10:31.733845    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:31 GMT
	I0718 21:10:31.733857    5402 request.go:1212] Response Body: {
	  "major": "1",
	  "minor": "30",
	  "gitVersion": "v1.30.3",
	  "gitCommit": "6fc0a69044f1ac4c13841ec4391224a2df241460",
	  "gitTreeState": "clean",
	  "buildDate": "2024-07-16T23:48:12Z",
	  "goVersion": "go1.22.5",
	  "compiler": "gc",
	  "platform": "linux/amd64"
	}
	I0718 21:10:31.733878    5402 api_server.go:141] control plane version: v1.30.3
	I0718 21:10:31.733886    5402 api_server.go:131] duration metric: took 4.161354ms to wait for apiserver health ...
	I0718 21:10:31.733891    5402 system_pods.go:43] waiting for kube-system pods to appear ...
	I0718 21:10:31.916683    5402 request.go:629] Waited for 182.720249ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods
	I0718 21:10:31.916763    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods
	I0718 21:10:31.916775    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:31.916786    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:31.916794    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:31.920682    5402 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0718 21:10:31.920697    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:31.920704    5402 round_trippers.go:580]     Audit-Id: 1b3bf7b3-e4a9-4b19-b4fc-ab8057c6af44
	I0718 21:10:31.920708    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:31.920712    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:31.920716    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:31.920719    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:31.920723    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:32 GMT
	I0718 21:10:31.921862    5402 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1311"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1304","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86411 chars]
	I0718 21:10:31.923756    5402 system_pods.go:59] 12 kube-system pods found
	I0718 21:10:31.923766    5402 system_pods.go:61] "coredns-7db6d8ff4d-76x8d" [55e9cca6-f3d6-4b2f-a8de-df91db8e186a] Running
	I0718 21:10:31.923770    5402 system_pods.go:61] "etcd-multinode-127000" [4d4a84eb-c0c3-44f3-a515-e99b9ba8fe88] Running
	I0718 21:10:31.923782    5402 system_pods.go:61] "kindnet-28cb8" [f603b4ff-800e-40e6-9c53-20626c4dfd35] Running
	I0718 21:10:31.923786    5402 system_pods.go:61] "kindnet-ks8xk" [358f14a8-284b-4570-96d1-d519f18269fa] Running
	I0718 21:10:31.923789    5402 system_pods.go:61] "kindnet-lt5bk" [f81f29e6-917b-4347-ad73-aa9b51320b17] Running
	I0718 21:10:31.923792    5402 system_pods.go:61] "kube-apiserver-multinode-127000" [15bce3aa-75a4-4cca-beec-20a4eeed2c14] Running
	I0718 21:10:31.923795    5402 system_pods.go:61] "kube-controller-manager-multinode-127000" [38250320-d12a-418f-867a-05a82f4f876c] Running
	I0718 21:10:31.923798    5402 system_pods.go:61] "kube-proxy-8j597" [51e85da8-2b18-4373-8f84-65ed52d6bc13] Running
	I0718 21:10:31.923801    5402 system_pods.go:61] "kube-proxy-8nvff" [4b740c91-be18-4bc8-9698-0b4fbda8695e] Running
	I0718 21:10:31.923803    5402 system_pods.go:61] "kube-proxy-nxf5m" [e48c420f-b1a1-4a9e-bc7e-fa0d640e5764] Running
	I0718 21:10:31.923805    5402 system_pods.go:61] "kube-scheduler-multinode-127000" [3060259c-364e-4c24-ae43-107cc1973705] Running
	I0718 21:10:31.923809    5402 system_pods.go:61] "storage-provisioner" [cd072b88-33f2-4988-985a-f1a00f8eb449] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0718 21:10:31.923814    5402 system_pods.go:74] duration metric: took 189.912921ms to wait for pod list to return data ...
	I0718 21:10:31.923824    5402 default_sa.go:34] waiting for default service account to be created ...
	I0718 21:10:32.115683    5402 request.go:629] Waited for 191.790892ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/namespaces/default/serviceaccounts
	I0718 21:10:32.115867    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/default/serviceaccounts
	I0718 21:10:32.115878    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:32.115889    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:32.115897    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:32.118782    5402 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0718 21:10:32.118795    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:32.118802    5402 round_trippers.go:580]     Audit-Id: bea34ab0-83a1-4c36-8484-ce99f2f99ef5
	I0718 21:10:32.118806    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:32.118810    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:32.118814    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:32.118817    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:32.118822    5402 round_trippers.go:580]     Content-Length: 262
	I0718 21:10:32.118825    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:32 GMT
	I0718 21:10:32.118850    5402 request.go:1212] Response Body: {"kind":"ServiceAccountList","apiVersion":"v1","metadata":{"resourceVersion":"1311"},"items":[{"metadata":{"name":"default","namespace":"default","uid":"5f5cf476-ad0a-497e-bf9b-7a00ccdfe7cb","resourceVersion":"307","creationTimestamp":"2024-07-19T04:03:03Z"}}]}
	I0718 21:10:32.118980    5402 default_sa.go:45] found service account: "default"
	I0718 21:10:32.118992    5402 default_sa.go:55] duration metric: took 195.156298ms for default service account to be created ...
	I0718 21:10:32.118999    5402 system_pods.go:116] waiting for k8s-apps to be running ...
	I0718 21:10:32.316459    5402 request.go:629] Waited for 197.411151ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods
	I0718 21:10:32.316624    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/namespaces/kube-system/pods
	I0718 21:10:32.316636    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:32.316648    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:32.316654    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:32.320966    5402 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0718 21:10:32.320993    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:32.321000    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:32.321005    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:32.321009    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:32 GMT
	I0718 21:10:32.321013    5402 round_trippers.go:580]     Audit-Id: 97783e68-54b0-4f9d-b7ea-fa80d5c4bcf2
	I0718 21:10:32.321017    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:32.321020    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:32.321862    5402 request.go:1212] Response Body: {"kind":"PodList","apiVersion":"v1","metadata":{"resourceVersion":"1311"},"items":[{"metadata":{"name":"coredns-7db6d8ff4d-76x8d","generateName":"coredns-7db6d8ff4d-","namespace":"kube-system","uid":"55e9cca6-f3d6-4b2f-a8de-df91db8e186a","resourceVersion":"1304","creationTimestamp":"2024-07-19T04:03:04Z","labels":{"k8s-app":"kube-dns","pod-template-hash":"7db6d8ff4d"},"ownerReferences":[{"apiVersion":"apps/v1","kind":"ReplicaSet","name":"coredns-7db6d8ff4d","uid":"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5","controller":true,"blockOwnerDeletion":true}],"managedFields":[{"manager":"kube-controller-manager","operation":"Update","apiVersion":"v1","time":"2024-07-19T04:03:04Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:generateName":{},"f:labels":{".":{},"f:k8s-app":{},"f:pod-template-hash":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"ba1c40f0-a5d4-4d4d-a13e-2ebb197b65d5\"}":{}}},"f:spec":{"f:affinity":{".":{},"f:podAntiAffinity":{".":{},
"f:preferredDuringSchedulingIgnoredDuringExecution":{}}},"f:containers" [truncated 86411 chars]
	I0718 21:10:32.323777    5402 system_pods.go:86] 12 kube-system pods found
	I0718 21:10:32.323786    5402 system_pods.go:89] "coredns-7db6d8ff4d-76x8d" [55e9cca6-f3d6-4b2f-a8de-df91db8e186a] Running
	I0718 21:10:32.323790    5402 system_pods.go:89] "etcd-multinode-127000" [4d4a84eb-c0c3-44f3-a515-e99b9ba8fe88] Running
	I0718 21:10:32.323793    5402 system_pods.go:89] "kindnet-28cb8" [f603b4ff-800e-40e6-9c53-20626c4dfd35] Running
	I0718 21:10:32.323797    5402 system_pods.go:89] "kindnet-ks8xk" [358f14a8-284b-4570-96d1-d519f18269fa] Running
	I0718 21:10:32.323801    5402 system_pods.go:89] "kindnet-lt5bk" [f81f29e6-917b-4347-ad73-aa9b51320b17] Running
	I0718 21:10:32.323804    5402 system_pods.go:89] "kube-apiserver-multinode-127000" [15bce3aa-75a4-4cca-beec-20a4eeed2c14] Running
	I0718 21:10:32.323807    5402 system_pods.go:89] "kube-controller-manager-multinode-127000" [38250320-d12a-418f-867a-05a82f4f876c] Running
	I0718 21:10:32.323810    5402 system_pods.go:89] "kube-proxy-8j597" [51e85da8-2b18-4373-8f84-65ed52d6bc13] Running
	I0718 21:10:32.323813    5402 system_pods.go:89] "kube-proxy-8nvff" [4b740c91-be18-4bc8-9698-0b4fbda8695e] Running
	I0718 21:10:32.323819    5402 system_pods.go:89] "kube-proxy-nxf5m" [e48c420f-b1a1-4a9e-bc7e-fa0d640e5764] Running
	I0718 21:10:32.323822    5402 system_pods.go:89] "kube-scheduler-multinode-127000" [3060259c-364e-4c24-ae43-107cc1973705] Running
	I0718 21:10:32.323828    5402 system_pods.go:89] "storage-provisioner" [cd072b88-33f2-4988-985a-f1a00f8eb449] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0718 21:10:32.323833    5402 system_pods.go:126] duration metric: took 204.823908ms to wait for k8s-apps to be running ...
	I0718 21:10:32.323841    5402 system_svc.go:44] waiting for kubelet service to be running ....
	I0718 21:10:32.323888    5402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 21:10:32.334816    5402 system_svc.go:56] duration metric: took 10.972843ms WaitForService to wait for kubelet
	I0718 21:10:32.334831    5402 kubeadm.go:582] duration metric: took 31.568232076s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:10:32.334849    5402 node_conditions.go:102] verifying NodePressure condition ...
	I0718 21:10:32.515263    5402 request.go:629] Waited for 180.343781ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.17:8443/api/v1/nodes
	I0718 21:10:32.515304    5402 round_trippers.go:463] GET https://192.169.0.17:8443/api/v1/nodes
	I0718 21:10:32.515309    5402 round_trippers.go:469] Request Headers:
	I0718 21:10:32.515314    5402 round_trippers.go:473]     Accept: application/json, */*
	I0718 21:10:32.515318    5402 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0718 21:10:32.516974    5402 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0718 21:10:32.516984    5402 round_trippers.go:577] Response Headers:
	I0718 21:10:32.516993    5402 round_trippers.go:580]     Cache-Control: no-cache, private
	I0718 21:10:32.516996    5402 round_trippers.go:580]     Content-Type: application/json
	I0718 21:10:32.517000    5402 round_trippers.go:580]     X-Kubernetes-Pf-Flowschema-Uid: 38a5eb9e-0a46-4c94-8c6f-fc5bd2760437
	I0718 21:10:32.517003    5402 round_trippers.go:580]     X-Kubernetes-Pf-Prioritylevel-Uid: ac648951-384f-44d3-b39c-9955cd96e51e
	I0718 21:10:32.517006    5402 round_trippers.go:580]     Date: Fri, 19 Jul 2024 04:10:32 GMT
	I0718 21:10:32.517010    5402 round_trippers.go:580]     Audit-Id: 5d131641-1a51-4ec3-a8dc-ea2e9525673c
	I0718 21:10:32.517127    5402 request.go:1212] Response Body: {"kind":"NodeList","apiVersion":"v1","metadata":{"resourceVersion":"1311"},"items":[{"metadata":{"name":"multinode-127000","uid":"10f1a142-8408-4762-bc11-07417f3744ca","resourceVersion":"1284","creationTimestamp":"2024-07-19T04:02:48Z","labels":{"beta.kubernetes.io/arch":"amd64","beta.kubernetes.io/os":"linux","kubernetes.io/arch":"amd64","kubernetes.io/hostname":"multinode-127000","kubernetes.io/os":"linux","minikube.k8s.io/commit":"c5c4003e63e14c031fd1d49a13aab215383064db","minikube.k8s.io/name":"multinode-127000","minikube.k8s.io/primary":"true","minikube.k8s.io/updated_at":"2024_07_18T21_02_51_0700","minikube.k8s.io/version":"v1.33.1","node-role.kubernetes.io/control-plane":"","node.kubernetes.io/exclude-from-external-load-balancers":""},"annotations":{"kubeadm.alpha.kubernetes.io/cri-socket":"unix:///var/run/cri-dockerd.sock","node.alpha.kubernetes.io/ttl":"0","volumes.kubernetes.io/controller-managed-attach-detach":"true"},"managedFi
elds":[{"manager":"kubelet","operation":"Update","apiVersion":"v1","tim [truncated 10031 chars]
	I0718 21:10:32.517446    5402 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0718 21:10:32.517454    5402 node_conditions.go:123] node cpu capacity is 2
	I0718 21:10:32.517461    5402 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0718 21:10:32.517466    5402 node_conditions.go:123] node cpu capacity is 2
	I0718 21:10:32.517470    5402 node_conditions.go:105] duration metric: took 182.610645ms to run NodePressure ...
	I0718 21:10:32.517477    5402 start.go:241] waiting for startup goroutines ...
	I0718 21:10:32.517483    5402 start.go:246] waiting for cluster config update ...
	I0718 21:10:32.517489    5402 start.go:255] writing updated cluster config ...
	I0718 21:10:32.541277    5402 out.go:177] 
	I0718 21:10:32.563564    5402 config.go:182] Loaded profile config "multinode-127000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:10:32.563691    5402 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/config.json ...
	I0718 21:10:32.586168    5402 out.go:177] * Starting "multinode-127000-m02" worker node in "multinode-127000" cluster
	I0718 21:10:32.628997    5402 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:10:32.629023    5402 cache.go:56] Caching tarball of preloaded images
	I0718 21:10:32.629165    5402 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0718 21:10:32.629178    5402 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:10:32.629263    5402 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/config.json ...
	I0718 21:10:32.629871    5402 start.go:360] acquireMachinesLock for multinode-127000-m02: {Name:mk8a0ac4b11cd5d9eba5ac8b9ae33317742f9112 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:10:32.629936    5402 start.go:364] duration metric: took 48.568µs to acquireMachinesLock for "multinode-127000-m02"
	I0718 21:10:32.629954    5402 start.go:96] Skipping create...Using existing machine configuration
	I0718 21:10:32.629960    5402 fix.go:54] fixHost starting: m02
	I0718 21:10:32.630261    5402 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 21:10:32.630278    5402 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 21:10:32.639255    5402 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53451
	I0718 21:10:32.639596    5402 main.go:141] libmachine: () Calling .GetVersion
	I0718 21:10:32.639981    5402 main.go:141] libmachine: Using API Version  1
	I0718 21:10:32.639998    5402 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 21:10:32.640248    5402 main.go:141] libmachine: () Calling .GetMachineName
	I0718 21:10:32.640378    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .DriverName
	I0718 21:10:32.640498    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetState
	I0718 21:10:32.640605    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 21:10:32.640673    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | hyperkit pid from json: 5340
	I0718 21:10:32.641600    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | hyperkit pid 5340 missing from process table
	I0718 21:10:32.641631    5402 fix.go:112] recreateIfNeeded on multinode-127000-m02: state=Stopped err=<nil>
	I0718 21:10:32.641642    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .DriverName
	W0718 21:10:32.641719    5402 fix.go:138] unexpected machine state, will restart: <nil>
	I0718 21:10:32.663372    5402 out.go:177] * Restarting existing hyperkit VM for "multinode-127000-m02" ...
	I0718 21:10:32.704966    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .Start
	I0718 21:10:32.705130    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 21:10:32.705157    5402 main.go:141] libmachine: (multinode-127000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/hyperkit.pid
	I0718 21:10:32.706118    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | hyperkit pid 5340 missing from process table
	I0718 21:10:32.706129    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | pid 5340 is in state "Stopped"
	I0718 21:10:32.706139    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/hyperkit.pid...
	I0718 21:10:32.706374    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | Using UUID e9cb8dfe-c218-475a-adda-766363901a8e
	I0718 21:10:32.731699    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | Generated MAC 3a:d2:59:42:45:2c
	I0718 21:10:32.731721    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-127000
	I0718 21:10:32.731872    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:32 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e9cb8dfe-c218-475a-adda-766363901a8e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bec00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0718 21:10:32.731913    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:32 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"e9cb8dfe-c218-475a-adda-766363901a8e", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003bec00)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:""
, process:(*os.Process)(nil)}
	I0718 21:10:32.731961    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:32 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "e9cb8dfe-c218-475a-adda-766363901a8e", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/multinode-127000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/tty,log=/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/bzimage,/Users/j
enkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-127000"}
	I0718 21:10:32.732011    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:32 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U e9cb8dfe-c218-475a-adda-766363901a8e -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/multinode-127000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/tty,log=/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/bzimage,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/mult
inode-127000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=multinode-127000"
	I0718 21:10:32.732023    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:32 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0718 21:10:32.733410    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:32 DEBUG: hyperkit: Pid is 5426
	I0718 21:10:32.733827    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | Attempt 0
	I0718 21:10:32.733846    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 21:10:32.733915    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | hyperkit pid from json: 5426
	I0718 21:10:32.735712    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | Searching for 3a:d2:59:42:45:2c in /var/db/dhcpd_leases ...
	I0718 21:10:32.735800    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | Found 18 entries in /var/db/dhcpd_leases!
	I0718 21:10:32.735814    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:d2:e2:11:67:74:1c ID:1,d2:e2:11:67:74:1c Lease:0x669b386d}
	I0718 21:10:32.735824    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:4c:de:4f:d8:27 ID:1,6:4c:de:4f:d8:27 Lease:0x6699e6d1}
	I0718 21:10:32.735831    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:d2:59:42:45:2c ID:1,3a:d2:59:42:45:2c Lease:0x669b37f6}
	I0718 21:10:32.735839    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | Found match: 3a:d2:59:42:45:2c
	I0718 21:10:32.735848    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | IP: 192.169.0.18
	I0718 21:10:32.735905    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetConfigRaw
	I0718 21:10:32.741609    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:32 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0718 21:10:32.758154    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetIP
	I0718 21:10:32.758555    5402 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/multinode-127000/config.json ...
	I0718 21:10:32.759275    5402 machine.go:94] provisionDockerMachine start ...
	I0718 21:10:32.759310    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .DriverName
	I0718 21:10:32.759463    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHHostname
	I0718 21:10:32.759587    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHPort
	I0718 21:10:32.759722    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:10:32.759872    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:10:32.760011    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHUsername
	I0718 21:10:32.760185    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:10:32.760410    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0718 21:10:32.760421    5402 main.go:141] libmachine: About to run SSH command:
	hostname
	I0718 21:10:32.767227    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:32 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0718 21:10:32.768518    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0718 21:10:32.768559    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0718 21:10:32.768587    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0718 21:10:32.768604    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:32 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0718 21:10:33.149383    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0718 21:10:33.149398    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0718 21:10:33.264080    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0718 21:10:33.264101    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0718 21:10:33.264122    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0718 21:10:33.264133    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:33 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0718 21:10:33.264925    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:33 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0718 21:10:33.264936    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:33 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0718 21:10:38.544157    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:38 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0718 21:10:38.544248    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:38 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0718 21:10:38.544256    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:38 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0718 21:10:38.568627    5402 main.go:141] libmachine: (multinode-127000-m02) DBG | 2024/07/18 21:10:38 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0718 21:11:07.824870    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0718 21:11:07.824885    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetMachineName
	I0718 21:11:07.825005    5402 buildroot.go:166] provisioning hostname "multinode-127000-m02"
	I0718 21:11:07.825016    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetMachineName
	I0718 21:11:07.825103    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHHostname
	I0718 21:11:07.825194    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHPort
	I0718 21:11:07.825288    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:07.825382    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:07.825463    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHUsername
	I0718 21:11:07.825643    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:11:07.825825    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0718 21:11:07.825834    5402 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-127000-m02 && echo "multinode-127000-m02" | sudo tee /etc/hostname
	I0718 21:11:07.888187    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-127000-m02
	
	I0718 21:11:07.888202    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHHostname
	I0718 21:11:07.888332    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHPort
	I0718 21:11:07.888430    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:07.888520    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:07.888631    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHUsername
	I0718 21:11:07.888752    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:11:07.888897    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0718 21:11:07.888909    5402 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-127000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-127000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-127000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 21:11:07.947668    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 21:11:07.947684    5402 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1411/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1411/.minikube}
	I0718 21:11:07.947698    5402 buildroot.go:174] setting up certificates
	I0718 21:11:07.947704    5402 provision.go:84] configureAuth start
	I0718 21:11:07.947712    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetMachineName
	I0718 21:11:07.947850    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetIP
	I0718 21:11:07.947947    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHHostname
	I0718 21:11:07.948028    5402 provision.go:143] copyHostCerts
	I0718 21:11:07.948055    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19302-1411/.minikube/cert.pem
	I0718 21:11:07.948119    5402 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1411/.minikube/cert.pem, removing ...
	I0718 21:11:07.948125    5402 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1411/.minikube/cert.pem
	I0718 21:11:07.948300    5402 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1411/.minikube/cert.pem (1123 bytes)
	I0718 21:11:07.948505    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19302-1411/.minikube/key.pem
	I0718 21:11:07.948551    5402 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1411/.minikube/key.pem, removing ...
	I0718 21:11:07.948556    5402 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1411/.minikube/key.pem
	I0718 21:11:07.948644    5402 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1411/.minikube/key.pem (1675 bytes)
	I0718 21:11:07.948785    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.pem
	I0718 21:11:07.948825    5402 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.pem, removing ...
	I0718 21:11:07.948830    5402 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.pem
	I0718 21:11:07.948909    5402 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.pem (1082 bytes)
	I0718 21:11:07.949049    5402 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca-key.pem org=jenkins.multinode-127000-m02 san=[127.0.0.1 192.169.0.18 localhost minikube multinode-127000-m02]
	I0718 21:11:08.143659    5402 provision.go:177] copyRemoteCerts
	I0718 21:11:08.143717    5402 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 21:11:08.143744    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHHostname
	I0718 21:11:08.143882    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHPort
	I0718 21:11:08.143967    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:08.144061    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHUsername
	I0718 21:11:08.144155    5402 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/id_rsa Username:docker}
	I0718 21:11:08.176532    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0718 21:11:08.176606    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 21:11:08.195955    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0718 21:11:08.196020    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server.pem --> /etc/docker/server.pem (1229 bytes)
	I0718 21:11:08.215815    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0718 21:11:08.215883    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0718 21:11:08.235156    5402 provision.go:87] duration metric: took 287.432646ms to configureAuth
	I0718 21:11:08.235178    5402 buildroot.go:189] setting minikube options for container-runtime
	I0718 21:11:08.235377    5402 config.go:182] Loaded profile config "multinode-127000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:11:08.235407    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .DriverName
	I0718 21:11:08.235539    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHHostname
	I0718 21:11:08.235628    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHPort
	I0718 21:11:08.235718    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:08.235809    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:08.235899    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHUsername
	I0718 21:11:08.236024    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:11:08.236155    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0718 21:11:08.236163    5402 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 21:11:08.288129    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0718 21:11:08.288142    5402 buildroot.go:70] root file system type: tmpfs
	I0718 21:11:08.288216    5402 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 21:11:08.288227    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHHostname
	I0718 21:11:08.288361    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHPort
	I0718 21:11:08.288460    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:08.288564    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:08.288655    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHUsername
	I0718 21:11:08.288801    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:11:08.288943    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0718 21:11:08.288985    5402 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.17"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 21:11:08.350933    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.17
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 21:11:08.350948    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHHostname
	I0718 21:11:08.351082    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHPort
	I0718 21:11:08.351174    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:08.351259    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:08.351365    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHUsername
	I0718 21:11:08.351490    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:11:08.351632    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0718 21:11:08.351644    5402 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 21:11:09.927444    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0718 21:11:09.927468    5402 machine.go:97] duration metric: took 37.167077876s to provisionDockerMachine
	I0718 21:11:09.927475    5402 start.go:293] postStartSetup for "multinode-127000-m02" (driver="hyperkit")
	I0718 21:11:09.927485    5402 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 21:11:09.927497    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .DriverName
	I0718 21:11:09.927694    5402 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 21:11:09.927706    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHHostname
	I0718 21:11:09.927800    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHPort
	I0718 21:11:09.927889    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:09.927977    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHUsername
	I0718 21:11:09.928059    5402 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/id_rsa Username:docker}
	I0718 21:11:09.961907    5402 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 21:11:09.964865    5402 command_runner.go:130] > NAME=Buildroot
	I0718 21:11:09.964873    5402 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0718 21:11:09.964877    5402 command_runner.go:130] > ID=buildroot
	I0718 21:11:09.964881    5402 command_runner.go:130] > VERSION_ID=2023.02.9
	I0718 21:11:09.964885    5402 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0718 21:11:09.965015    5402 info.go:137] Remote host: Buildroot 2023.02.9
	I0718 21:11:09.965025    5402 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1411/.minikube/addons for local assets ...
	I0718 21:11:09.965128    5402 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1411/.minikube/files for local assets ...
	I0718 21:11:09.965315    5402 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1411/.minikube/files/etc/ssl/certs/19482.pem -> 19482.pem in /etc/ssl/certs
	I0718 21:11:09.965322    5402 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19302-1411/.minikube/files/etc/ssl/certs/19482.pem -> /etc/ssl/certs/19482.pem
	I0718 21:11:09.965532    5402 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 21:11:09.973514    5402 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/files/etc/ssl/certs/19482.pem --> /etc/ssl/certs/19482.pem (1708 bytes)
	I0718 21:11:09.992309    5402 start.go:296] duration metric: took 64.820952ms for postStartSetup
	I0718 21:11:09.992329    5402 fix.go:56] duration metric: took 37.361260324s for fixHost
	I0718 21:11:09.992345    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHHostname
	I0718 21:11:09.992477    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHPort
	I0718 21:11:09.992556    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:09.992651    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:09.992749    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHUsername
	I0718 21:11:09.992862    5402 main.go:141] libmachine: Using SSH client type: native
	I0718 21:11:09.993002    5402 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x32c70c0] 0x32c9e20 <nil>  [] 0s} 192.169.0.18 22 <nil> <nil>}
	I0718 21:11:09.993012    5402 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0718 21:11:10.045137    5402 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721362269.886236030
	
	I0718 21:11:10.045150    5402 fix.go:216] guest clock: 1721362269.886236030
	I0718 21:11:10.045155    5402 fix.go:229] Guest: 2024-07-18 21:11:09.88623603 -0700 PDT Remote: 2024-07-18 21:11:09.992335 -0700 PDT m=+122.291588156 (delta=-106.09897ms)
	I0718 21:11:10.045169    5402 fix.go:200] guest clock delta is within tolerance: -106.09897ms
	I0718 21:11:10.045174    5402 start.go:83] releasing machines lock for "multinode-127000-m02", held for 37.414119601s
	I0718 21:11:10.045191    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .DriverName
	I0718 21:11:10.045325    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetIP
	I0718 21:11:10.069576    5402 out.go:177] * Found network options:
	I0718 21:11:10.089556    5402 out.go:177]   - NO_PROXY=192.169.0.17
	W0718 21:11:10.110680    5402 proxy.go:119] fail to check proxy env: Error ip not in block
	I0718 21:11:10.110717    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .DriverName
	I0718 21:11:10.111529    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .DriverName
	I0718 21:11:10.111754    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .DriverName
	I0718 21:11:10.111841    5402 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 21:11:10.111879    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHHostname
	W0718 21:11:10.111988    5402 proxy.go:119] fail to check proxy env: Error ip not in block
	I0718 21:11:10.112102    5402 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0718 21:11:10.112121    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHHostname
	I0718 21:11:10.112172    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHPort
	I0718 21:11:10.112255    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHPort
	I0718 21:11:10.112309    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:10.112405    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:11:10.112477    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHUsername
	I0718 21:11:10.112582    5402 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/id_rsa Username:docker}
	I0718 21:11:10.112596    5402 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHUsername
	I0718 21:11:10.112701    5402 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/id_rsa Username:docker}
	I0718 21:11:10.141918    5402 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0718 21:11:10.142038    5402 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0718 21:11:10.142104    5402 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0718 21:11:10.188167    5402 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0718 21:11:10.188996    5402 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0718 21:11:10.189024    5402 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 21:11:10.189039    5402 start.go:495] detecting cgroup driver to use...
	I0718 21:11:10.189152    5402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 21:11:10.204600    5402 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0718 21:11:10.204850    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0718 21:11:10.213829    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0718 21:11:10.222757    5402 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0718 21:11:10.222812    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0718 21:11:10.231817    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 21:11:10.240739    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0718 21:11:10.249684    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 21:11:10.258638    5402 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 21:11:10.268013    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0718 21:11:10.276993    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0718 21:11:10.285910    5402 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0718 21:11:10.295134    5402 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 21:11:10.303190    5402 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0718 21:11:10.303340    5402 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 21:11:10.311494    5402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:11:10.408459    5402 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0718 21:11:10.426650    5402 start.go:495] detecting cgroup driver to use...
	I0718 21:11:10.426721    5402 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 21:11:10.447527    5402 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0718 21:11:10.449131    5402 command_runner.go:130] > [Unit]
	I0718 21:11:10.449142    5402 command_runner.go:130] > Description=Docker Application Container Engine
	I0718 21:11:10.449154    5402 command_runner.go:130] > Documentation=https://docs.docker.com
	I0718 21:11:10.449162    5402 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0718 21:11:10.449169    5402 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0718 21:11:10.449173    5402 command_runner.go:130] > StartLimitBurst=3
	I0718 21:11:10.449177    5402 command_runner.go:130] > StartLimitIntervalSec=60
	I0718 21:11:10.449181    5402 command_runner.go:130] > [Service]
	I0718 21:11:10.449184    5402 command_runner.go:130] > Type=notify
	I0718 21:11:10.449188    5402 command_runner.go:130] > Restart=on-failure
	I0718 21:11:10.449191    5402 command_runner.go:130] > Environment=NO_PROXY=192.169.0.17
	I0718 21:11:10.449198    5402 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0718 21:11:10.449207    5402 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0718 21:11:10.449213    5402 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0718 21:11:10.449219    5402 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0718 21:11:10.449225    5402 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0718 21:11:10.449231    5402 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0718 21:11:10.449237    5402 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0718 21:11:10.449250    5402 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0718 21:11:10.449256    5402 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0718 21:11:10.449259    5402 command_runner.go:130] > ExecStart=
	I0718 21:11:10.449274    5402 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	I0718 21:11:10.449283    5402 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0718 21:11:10.449293    5402 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0718 21:11:10.449298    5402 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0718 21:11:10.449304    5402 command_runner.go:130] > LimitNOFILE=infinity
	I0718 21:11:10.449307    5402 command_runner.go:130] > LimitNPROC=infinity
	I0718 21:11:10.449311    5402 command_runner.go:130] > LimitCORE=infinity
	I0718 21:11:10.449315    5402 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0718 21:11:10.449321    5402 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0718 21:11:10.449325    5402 command_runner.go:130] > TasksMax=infinity
	I0718 21:11:10.449331    5402 command_runner.go:130] > TimeoutStartSec=0
	I0718 21:11:10.449337    5402 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0718 21:11:10.449340    5402 command_runner.go:130] > Delegate=yes
	I0718 21:11:10.449344    5402 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0718 21:11:10.449351    5402 command_runner.go:130] > KillMode=process
	I0718 21:11:10.449355    5402 command_runner.go:130] > [Install]
	I0718 21:11:10.449359    5402 command_runner.go:130] > WantedBy=multi-user.target
	I0718 21:11:10.449470    5402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 21:11:10.461265    5402 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 21:11:10.482926    5402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 21:11:10.493342    5402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 21:11:10.509204    5402 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0718 21:11:10.530540    5402 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 21:11:10.541206    5402 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 21:11:10.556963    5402 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0718 21:11:10.557323    5402 ssh_runner.go:195] Run: which cri-dockerd
	I0718 21:11:10.560174    5402 command_runner.go:130] > /usr/bin/cri-dockerd
	I0718 21:11:10.560356    5402 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0718 21:11:10.567648    5402 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0718 21:11:10.581315    5402 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 21:11:10.676062    5402 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 21:11:10.790035    5402 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0718 21:11:10.790064    5402 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0718 21:11:10.804358    5402 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:11:10.894374    5402 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 21:12:11.742801    5402 command_runner.go:130] ! Job for docker.service failed because the control process exited with error code.
	I0718 21:12:11.742816    5402 command_runner.go:130] ! See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	I0718 21:12:11.742825    5402 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m0.846631847s)
	I0718 21:12:11.742882    5402 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0718 21:12:11.751825    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0718 21:12:11.751838    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:08.484482329Z" level=info msg="Starting up"
	I0718 21:12:11.751846    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:08.485167981Z" level=info msg="containerd not running, starting managed containerd"
	I0718 21:12:11.751859    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:08.485801332Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	I0718 21:12:11.751868    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.502013303Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	I0718 21:12:11.751878    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.516958766Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	I0718 21:12:11.751894    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517038396Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	I0718 21:12:11.751903    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517155084Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	I0718 21:12:11.751914    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517197264Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	I0718 21:12:11.751927    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517355966Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	I0718 21:12:11.751940    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517457389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	I0718 21:12:11.751965    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517585479Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0718 21:12:11.751975    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517625666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	I0718 21:12:11.751985    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517656936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	I0718 21:12:11.751995    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517688957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	I0718 21:12:11.752005    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517881945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	I0718 21:12:11.752015    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.518099672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	I0718 21:12:11.752029    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.519645604Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	I0718 21:12:11.752039    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.519696927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	I0718 21:12:11.752060    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.519828049Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	I0718 21:12:11.752071    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.519870517Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	I0718 21:12:11.752080    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.520040351Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	I0718 21:12:11.752088    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.520089814Z" level=info msg="metadata content store policy set" policy=shared
	I0718 21:12:11.752097    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522003436Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	I0718 21:12:11.752107    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522064725Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	I0718 21:12:11.752115    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522104055Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	I0718 21:12:11.752126    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522136906Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	I0718 21:12:11.752135    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522168404Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	I0718 21:12:11.752143    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522233548Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	I0718 21:12:11.752152    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522448512Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	I0718 21:12:11.752163    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522530201Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	I0718 21:12:11.752172    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522566421Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	I0718 21:12:11.752182    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522596662Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	I0718 21:12:11.752191    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522630885Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	I0718 21:12:11.752201    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522660955Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	I0718 21:12:11.752211    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522697431Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	I0718 21:12:11.752226    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522732084Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	I0718 21:12:11.752237    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522762824Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	I0718 21:12:11.752245    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522792209Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	I0718 21:12:11.752287    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522821157Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	I0718 21:12:11.752299    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522848962Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	I0718 21:12:11.752312    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522945935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752320    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522982209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752329    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523011791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752338    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523044426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752347    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523073991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752356    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523102957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752365    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523131966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752373    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523160366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752381    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523189181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752392    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523228786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752401    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523261112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752409    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523289795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752418    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523320625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752427    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523355398Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	I0718 21:12:11.752435    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523391561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752444    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523421174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752453    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523448613Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	I0718 21:12:11.752463    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523523187Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	I0718 21:12:11.752474    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523566449Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	I0718 21:12:11.752484    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523596740Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	I0718 21:12:11.752638    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523625735Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	I0718 21:12:11.752651    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523653333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	I0718 21:12:11.752660    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523681797Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	I0718 21:12:11.752668    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523709460Z" level=info msg="NRI interface is disabled by configuration."
	I0718 21:12:11.752677    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523910253Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	I0718 21:12:11.752685    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523995611Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	I0718 21:12:11.752693    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.524058018Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	I0718 21:12:11.752704    5402 command_runner.go:130] > Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.524093051Z" level=info msg="containerd successfully booted in 0.022782s"
	I0718 21:12:11.752712    5402 command_runner.go:130] > Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.507162701Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	I0718 21:12:11.752719    5402 command_runner.go:130] > Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.519725545Z" level=info msg="Loading containers: start."
	I0718 21:12:11.752739    5402 command_runner.go:130] > Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.625326434Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	I0718 21:12:11.752751    5402 command_runner.go:130] > Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.687949447Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	I0718 21:12:11.752763    5402 command_runner.go:130] > Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.736564365Z" level=warning msg="error locating sandbox id aa61e63897fba10e81bbeedbce44590b2b7c0a112dd320b80ba533d1869ed2df: sandbox aa61e63897fba10e81bbeedbce44590b2b7c0a112dd320b80ba533d1869ed2df not found"
	I0718 21:12:11.752771    5402 command_runner.go:130] > Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.736780148Z" level=info msg="Loading containers: done."
	I0718 21:12:11.752780    5402 command_runner.go:130] > Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.744186186Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	I0718 21:12:11.752788    5402 command_runner.go:130] > Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.744371679Z" level=info msg="Daemon has completed initialization"
	I0718 21:12:11.752796    5402 command_runner.go:130] > Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.766998398Z" level=info msg="API listen on /var/run/docker.sock"
	I0718 21:12:11.752803    5402 command_runner.go:130] > Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.767075299Z" level=info msg="API listen on [::]:2376"
	I0718 21:12:11.752808    5402 command_runner.go:130] > Jul 19 04:11:09 multinode-127000-m02 systemd[1]: Started Docker Application Container Engine.
	I0718 21:12:11.752815    5402 command_runner.go:130] > Jul 19 04:11:10 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:10.768722806Z" level=info msg="Processing signal 'terminated'"
	I0718 21:12:11.752825    5402 command_runner.go:130] > Jul 19 04:11:10 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:10.769547405Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	I0718 21:12:11.752833    5402 command_runner.go:130] > Jul 19 04:11:10 multinode-127000-m02 systemd[1]: Stopping Docker Application Container Engine...
	I0718 21:12:11.752841    5402 command_runner.go:130] > Jul 19 04:11:10 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:10.770121951Z" level=info msg="Daemon shutdown complete"
	I0718 21:12:11.752879    5402 command_runner.go:130] > Jul 19 04:11:10 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:10.770184908Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	I0718 21:12:11.752888    5402 command_runner.go:130] > Jul 19 04:11:10 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:10.770198671Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	I0718 21:12:11.752896    5402 command_runner.go:130] > Jul 19 04:11:11 multinode-127000-m02 systemd[1]: docker.service: Deactivated successfully.
	I0718 21:12:11.752902    5402 command_runner.go:130] > Jul 19 04:11:11 multinode-127000-m02 systemd[1]: Stopped Docker Application Container Engine.
	I0718 21:12:11.752908    5402 command_runner.go:130] > Jul 19 04:11:11 multinode-127000-m02 systemd[1]: Starting Docker Application Container Engine...
	I0718 21:12:11.752914    5402 command_runner.go:130] > Jul 19 04:11:11 multinode-127000-m02 dockerd[847]: time="2024-07-19T04:11:11.807768811Z" level=info msg="Starting up"
	I0718 21:12:11.752923    5402 command_runner.go:130] > Jul 19 04:12:11 multinode-127000-m02 dockerd[847]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	I0718 21:12:11.752931    5402 command_runner.go:130] > Jul 19 04:12:11 multinode-127000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	I0718 21:12:11.752938    5402 command_runner.go:130] > Jul 19 04:12:11 multinode-127000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	I0718 21:12:11.752943    5402 command_runner.go:130] > Jul 19 04:12:11 multinode-127000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	I0718 21:12:11.777591    5402 out.go:177] 
	W0718 21:12:11.798090    5402 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 04:11:08 multinode-127000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:11:08 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:08.484482329Z" level=info msg="Starting up"
	Jul 19 04:11:08 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:08.485167981Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 04:11:08 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:08.485801332Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.502013303Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.516958766Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517038396Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517155084Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517197264Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517355966Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517457389Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517585479Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517625666Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517656936Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517688957Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.517881945Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.518099672Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.519645604Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.519696927Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.519828049Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.519870517Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.520040351Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.520089814Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522003436Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522064725Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522104055Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522136906Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522168404Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522233548Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522448512Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522530201Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522566421Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522596662Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522630885Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522660955Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522697431Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522732084Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522762824Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522792209Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522821157Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522848962Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522945935Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.522982209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523011791Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523044426Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523073991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523102957Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523131966Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523160366Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523189181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523228786Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523261112Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523289795Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523320625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523355398Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523391561Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523421174Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523448613Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523523187Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523566449Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523596740Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523625735Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523653333Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523681797Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523709460Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523910253Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.523995611Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.524058018Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 04:11:08 multinode-127000-m02 dockerd[519]: time="2024-07-19T04:11:08.524093051Z" level=info msg="containerd successfully booted in 0.022782s"
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.507162701Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.519725545Z" level=info msg="Loading containers: start."
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.625326434Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.687949447Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.736564365Z" level=warning msg="error locating sandbox id aa61e63897fba10e81bbeedbce44590b2b7c0a112dd320b80ba533d1869ed2df: sandbox aa61e63897fba10e81bbeedbce44590b2b7c0a112dd320b80ba533d1869ed2df not found"
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.736780148Z" level=info msg="Loading containers: done."
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.744186186Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.744371679Z" level=info msg="Daemon has completed initialization"
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.766998398Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 04:11:09 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:09.767075299Z" level=info msg="API listen on [::]:2376"
	Jul 19 04:11:09 multinode-127000-m02 systemd[1]: Started Docker Application Container Engine.
	Jul 19 04:11:10 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:10.768722806Z" level=info msg="Processing signal 'terminated'"
	Jul 19 04:11:10 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:10.769547405Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 04:11:10 multinode-127000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 04:11:10 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:10.770121951Z" level=info msg="Daemon shutdown complete"
	Jul 19 04:11:10 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:10.770184908Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 04:11:10 multinode-127000-m02 dockerd[512]: time="2024-07-19T04:11:10.770198671Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 04:11:11 multinode-127000-m02 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 04:11:11 multinode-127000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:11:11 multinode-127000-m02 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:11:11 multinode-127000-m02 dockerd[847]: time="2024-07-19T04:11:11.807768811Z" level=info msg="Starting up"
	Jul 19 04:12:11 multinode-127000-m02 dockerd[847]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:12:11 multinode-127000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:12:11 multinode-127000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:12:11 multinode-127000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0718 21:12:11.798193    5402 out.go:239] * 
	W0718 21:12:11.799426    5402 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:12:11.861206    5402 out.go:177] 
	
	
	==> Docker <==
	Jul 19 04:10:28 multinode-127000 dockerd[854]: time="2024-07-19T04:10:28.596382167Z" level=info msg="shim disconnected" id=f2524461cc22c316565bc9886051ccaf67633435391eb3e4d53f94b844be346b namespace=moby
	Jul 19 04:10:28 multinode-127000 dockerd[854]: time="2024-07-19T04:10:28.596585054Z" level=warning msg="cleaning up after shim disconnected" id=f2524461cc22c316565bc9886051ccaf67633435391eb3e4d53f94b844be346b namespace=moby
	Jul 19 04:10:28 multinode-127000 dockerd[854]: time="2024-07-19T04:10:28.596627373Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 19 04:10:29 multinode-127000 dockerd[854]: time="2024-07-19T04:10:29.398691384Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 04:10:29 multinode-127000 dockerd[854]: time="2024-07-19T04:10:29.398759200Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 04:10:29 multinode-127000 dockerd[854]: time="2024-07-19T04:10:29.398768806Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:10:29 multinode-127000 dockerd[854]: time="2024-07-19T04:10:29.398834331Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:10:29 multinode-127000 cri-dockerd[1101]: time="2024-07-19T04:10:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0264f23b3b3142dafbe3cf8ea2dc810595d4a3ee4806dc90b923900524460068/resolv.conf as [nameserver 192.169.0.1]"
	Jul 19 04:10:29 multinode-127000 dockerd[854]: time="2024-07-19T04:10:29.600868286Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 04:10:29 multinode-127000 dockerd[854]: time="2024-07-19T04:10:29.600920061Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 04:10:29 multinode-127000 dockerd[854]: time="2024-07-19T04:10:29.600932278Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:10:29 multinode-127000 dockerd[854]: time="2024-07-19T04:10:29.600996882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:10:29 multinode-127000 dockerd[854]: time="2024-07-19T04:10:29.728803029Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 04:10:29 multinode-127000 dockerd[854]: time="2024-07-19T04:10:29.729043009Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 04:10:29 multinode-127000 dockerd[854]: time="2024-07-19T04:10:29.729143296Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:10:29 multinode-127000 dockerd[854]: time="2024-07-19T04:10:29.736456448Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:10:29 multinode-127000 cri-dockerd[1101]: time="2024-07-19T04:10:29Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0a89bb0fa24e6f9e34eaa5883066a7a96e8b59e4dbd9500ccec1a0192ee50713/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 19 04:10:29 multinode-127000 dockerd[854]: time="2024-07-19T04:10:29.928743520Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 04:10:29 multinode-127000 dockerd[854]: time="2024-07-19T04:10:29.928844002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 04:10:29 multinode-127000 dockerd[854]: time="2024-07-19T04:10:29.928881038Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:10:29 multinode-127000 dockerd[854]: time="2024-07-19T04:10:29.929430467Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:10:39 multinode-127000 dockerd[854]: time="2024-07-19T04:10:39.677477396Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 19 04:10:39 multinode-127000 dockerd[854]: time="2024-07-19T04:10:39.677560692Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 19 04:10:39 multinode-127000 dockerd[854]: time="2024-07-19T04:10:39.677570298Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 19 04:10:39 multinode-127000 dockerd[854]: time="2024-07-19T04:10:39.678046672Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ea69e0eeb1bb4       6e38f40d628db       About a minute ago   Running             storage-provisioner       4                   e3179b3df5704       storage-provisioner
	d1cc0a0783743       8c811b4aec35f       About a minute ago   Running             busybox                   2                   0a89bb0fa24e6       busybox-fc5497c4f-zzsc5
	828d7c57103f5       cbb01a7bd410d       About a minute ago   Running             coredns                   2                   0264f23b3b314       coredns-7db6d8ff4d-76x8d
	378a349103bfc       5cc3abe5717db       2 minutes ago        Running             kindnet-cni               2                   594294fcd8545       kindnet-lt5bk
	f2524461cc22c       6e38f40d628db       2 minutes ago        Exited              storage-provisioner       3                   e3179b3df5704       storage-provisioner
	a0862f7c58ffa       55bb025d2cfa5       2 minutes ago        Running             kube-proxy                2                   5dfd587241ce0       kube-proxy-8j597
	048c9d222f525       3861cfcd7c04c       2 minutes ago        Running             etcd                      2                   66c57824d26c6       etcd-multinode-127000
	7ffc364d40288       1f6d574d502f3       2 minutes ago        Running             kube-apiserver            2                   8cf86b5b5119a       kube-apiserver-multinode-127000
	08265d497be9a       76932a3b37d7e       2 minutes ago        Running             kube-controller-manager   2                   4e1f06a857b1e       kube-controller-manager-multinode-127000
	a27425c0a86d1       3edc18e7b7672       2 minutes ago        Running             kube-scheduler            2                   c0204ba94f71f       kube-scheduler-multinode-127000
	d31c525af0d11       8c811b4aec35f       5 minutes ago        Exited              busybox                   1                   2735e2da184d2       busybox-fc5497c4f-zzsc5
	1368162a8f097       cbb01a7bd410d       5 minutes ago        Exited              coredns                   1                   dff6311790e5d       coredns-7db6d8ff4d-76x8d
	d4dc52db5a777       5cc3abe5717db       5 minutes ago        Exited              kindnet-cni               1                   fda4eb3809799       kindnet-lt5bk
	e12c9aa28fc6f       55bb025d2cfa5       5 minutes ago        Exited              kube-proxy                1                   f3dc8d1aa9186       kube-proxy-8j597
	f8a7e04c5c8e5       76932a3b37d7e       5 minutes ago        Exited              kube-controller-manager   1                   1acc8e66b837b       kube-controller-manager-multinode-127000
	35aa60e7a3f82       1f6d574d502f3       5 minutes ago        Exited              kube-apiserver            1                   ac73727fe7776       kube-apiserver-multinode-127000
	539be4bab7a76       3861cfcd7c04c       5 minutes ago        Exited              etcd                      1                   579e883db8c68       etcd-multinode-127000
	26653cd0d581a       3edc18e7b7672       5 minutes ago        Exited              kube-scheduler            1                   a77ea521ac99d       kube-scheduler-multinode-127000
	
	
	==> coredns [1368162a8f09] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:38186 - 7015 "HINFO IN 8367240292770748147.1335734406026330032. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.028077752s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [828d7c57103f] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:34569 - 23258 "HINFO IN 5240652719464758658.5202411009245834723. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.010009389s
	
	
	==> describe nodes <==
	Name:               multinode-127000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-127000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=multinode-127000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_18T21_02_51_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:02:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-127000
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:12:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Fri, 19 Jul 2024 04:10:17 +0000   Fri, 19 Jul 2024 04:02:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Fri, 19 Jul 2024 04:10:17 +0000   Fri, 19 Jul 2024 04:02:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Fri, 19 Jul 2024 04:10:17 +0000   Fri, 19 Jul 2024 04:02:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Fri, 19 Jul 2024 04:10:17 +0000   Fri, 19 Jul 2024 04:10:17 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.17
	  Hostname:    multinode-127000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 8676986a6e4445519bb4e45330ac3dda
	  System UUID:                21704d79-0000-0000-a7e1-5094631d4682
	  Boot ID:                    66eaff67-8bc8-46ad-a7ba-dae1b2093c43
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-zzsc5                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m1s
	  kube-system                 coredns-7db6d8ff4d-76x8d                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m9s
	  kube-system                 etcd-multinode-127000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m23s
	  kube-system                 kindnet-lt5bk                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m9s
	  kube-system                 kube-apiserver-multinode-127000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                 kube-controller-manager-multinode-127000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                 kube-proxy-8j597                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	  kube-system                 kube-scheduler-multinode-127000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m23s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m8s                   kube-proxy       
	  Normal  Starting                 2m14s                  kube-proxy       
	  Normal  Starting                 5m38s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m23s                  kubelet          Node multinode-127000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  9m23s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    9m23s                  kubelet          Node multinode-127000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m23s                  kubelet          Node multinode-127000 status is now: NodeHasSufficientPID
	  Normal  Starting                 9m23s                  kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m10s                  node-controller  Node multinode-127000 event: Registered Node multinode-127000 in Controller
	  Normal  NodeReady                8m50s                  kubelet          Node multinode-127000 status is now: NodeReady
	  Normal  NodeHasNoDiskPressure    5m43s (x8 over 5m43s)  kubelet          Node multinode-127000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m43s (x8 over 5m43s)  kubelet          Node multinode-127000 status is now: NodeHasSufficientMemory
	  Normal  Starting                 5m43s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     5m43s (x7 over 5m43s)  kubelet          Node multinode-127000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m43s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           5m26s                  node-controller  Node multinode-127000 event: Registered Node multinode-127000 in Controller
	  Normal  Starting                 2m22s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m22s (x8 over 2m22s)  kubelet          Node multinode-127000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m22s (x8 over 2m22s)  kubelet          Node multinode-127000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m22s (x7 over 2m22s)  kubelet          Node multinode-127000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m4s                   node-controller  Node multinode-127000 event: Registered Node multinode-127000 in Controller
	
	
	Name:               multinode-127000-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-127000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c5c4003e63e14c031fd1d49a13aab215383064db
	                    minikube.k8s.io/name=multinode-127000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_18T21_07_31_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Fri, 19 Jul 2024 04:07:31 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-127000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Fri, 19 Jul 2024 04:08:43 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Fri, 19 Jul 2024 04:07:46 +0000   Fri, 19 Jul 2024 04:10:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Fri, 19 Jul 2024 04:07:46 +0000   Fri, 19 Jul 2024 04:10:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Fri, 19 Jul 2024 04:07:46 +0000   Fri, 19 Jul 2024 04:10:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Fri, 19 Jul 2024 04:07:46 +0000   Fri, 19 Jul 2024 04:10:49 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.18
	  Hostname:    multinode-127000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 4c7b90f3c783403b9e44d76057797449
	  System UUID:                e9cb475a-0000-0000-adda-766363901a8e
	  Boot ID:                    5fa87ff8-a0ad-478e-b3e0-384d3cff1b09
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-vwc4b    0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	  kube-system                 kindnet-ks8xk              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      8m26s
	  kube-system                 kube-proxy-nxf5m           0 (0%)        0 (0%)      0 (0%)           0 (0%)         8m26s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 8m18s                  kube-proxy       
	  Normal  Starting                 4m40s                  kube-proxy       
	  Normal  NodeHasNoDiskPressure    8m26s (x2 over 8m26s)  kubelet          Node multinode-127000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     8m26s (x2 over 8m26s)  kubelet          Node multinode-127000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  8m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  8m26s (x2 over 8m26s)  kubelet          Node multinode-127000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                8m3s                   kubelet          Node multinode-127000-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  4m42s (x2 over 4m42s)  kubelet          Node multinode-127000-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m42s (x2 over 4m42s)  kubelet          Node multinode-127000-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m42s (x2 over 4m42s)  kubelet          Node multinode-127000-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  4m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                4m27s                  kubelet          Node multinode-127000-m02 status is now: NodeReady
	  Normal  RegisteredNode           2m4s                   node-controller  Node multinode-127000-m02 event: Registered Node multinode-127000-m02 in Controller
	  Normal  NodeNotReady             84s                    node-controller  Node multinode-127000-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.008098] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.374021] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007260] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.610523] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.246408] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +24.873732] systemd-fstab-generator[492]: Ignoring "noauto" option for root device
	[  +0.111687] systemd-fstab-generator[504]: Ignoring "noauto" option for root device
	[  +1.857845] systemd-fstab-generator[775]: Ignoring "noauto" option for root device
	[  +0.253260] systemd-fstab-generator[813]: Ignoring "noauto" option for root device
	[  +0.115716] systemd-fstab-generator[826]: Ignoring "noauto" option for root device
	[  +0.111380] systemd-fstab-generator[840]: Ignoring "noauto" option for root device
	[  +2.481992] systemd-fstab-generator[1054]: Ignoring "noauto" option for root device
	[  +0.097558] systemd-fstab-generator[1066]: Ignoring "noauto" option for root device
	[  +0.060227] kauditd_printk_skb: 217 callbacks suppressed
	[  +0.052363] systemd-fstab-generator[1078]: Ignoring "noauto" option for root device
	[  +0.128218] systemd-fstab-generator[1093]: Ignoring "noauto" option for root device
	[  +0.401739] systemd-fstab-generator[1215]: Ignoring "noauto" option for root device
	[  +1.738900] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	[  +6.618893] kauditd_printk_skb: 150 callbacks suppressed
	[Jul19 04:10] systemd-fstab-generator[2177]: Ignoring "noauto" option for root device
	[  +8.757499] kauditd_printk_skb: 70 callbacks suppressed
	[ +29.986807] kauditd_printk_skb: 15 callbacks suppressed
	
	
	==> etcd [048c9d222f52] <==
	{"level":"info","ts":"2024-07-19T04:09:54.529026Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T04:09:54.529072Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-07-19T04:09:54.529379Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94cba0f62c7f97f9 switched to configuration voters=(10721840317054556153)"}
	{"level":"info","ts":"2024-07-19T04:09:54.529533Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"894e36b18b81a269","local-member-id":"94cba0f62c7f97f9","added-peer-id":"94cba0f62c7f97f9","added-peer-peer-urls":["https://192.169.0.17:2380"]}
	{"level":"info","ts":"2024-07-19T04:09:54.529718Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"894e36b18b81a269","local-member-id":"94cba0f62c7f97f9","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T04:09:54.529903Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-07-19T04:09:54.532328Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-07-19T04:09:54.532666Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"94cba0f62c7f97f9","initial-advertise-peer-urls":["https://192.169.0.17:2380"],"listen-peer-urls":["https://192.169.0.17:2380"],"advertise-client-urls":["https://192.169.0.17:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.17:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-07-19T04:09:54.532748Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T04:09:54.533006Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.169.0.17:2380"}
	{"level":"info","ts":"2024-07-19T04:09:54.533033Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.17:2380"}
	{"level":"info","ts":"2024-07-19T04:09:56.03218Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94cba0f62c7f97f9 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-19T04:09:56.032331Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94cba0f62c7f97f9 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-19T04:09:56.032404Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94cba0f62c7f97f9 received MsgPreVoteResp from 94cba0f62c7f97f9 at term 3"}
	{"level":"info","ts":"2024-07-19T04:09:56.032445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94cba0f62c7f97f9 became candidate at term 4"}
	{"level":"info","ts":"2024-07-19T04:09:56.032472Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94cba0f62c7f97f9 received MsgVoteResp from 94cba0f62c7f97f9 at term 4"}
	{"level":"info","ts":"2024-07-19T04:09:56.032501Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94cba0f62c7f97f9 became leader at term 4"}
	{"level":"info","ts":"2024-07-19T04:09:56.032528Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 94cba0f62c7f97f9 elected leader 94cba0f62c7f97f9 at term 4"}
	{"level":"info","ts":"2024-07-19T04:09:56.033958Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"94cba0f62c7f97f9","local-member-attributes":"{Name:multinode-127000 ClientURLs:[https://192.169.0.17:2379]}","request-path":"/0/members/94cba0f62c7f97f9/attributes","cluster-id":"894e36b18b81a269","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T04:09:56.034242Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T04:09:56.034547Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T04:09:56.036662Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T04:09:56.037069Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T04:09:56.037124Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T04:09:56.038248Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.17:2379"}
	
	
	==> etcd [539be4bab7a7] <==
	{"level":"info","ts":"2024-07-19T04:06:31.593476Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-07-19T04:06:33.479036Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94cba0f62c7f97f9 is starting a new election at term 2"}
	{"level":"info","ts":"2024-07-19T04:06:33.47914Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94cba0f62c7f97f9 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-07-19T04:06:33.479176Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94cba0f62c7f97f9 received MsgPreVoteResp from 94cba0f62c7f97f9 at term 2"}
	{"level":"info","ts":"2024-07-19T04:06:33.479198Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94cba0f62c7f97f9 became candidate at term 3"}
	{"level":"info","ts":"2024-07-19T04:06:33.479206Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94cba0f62c7f97f9 received MsgVoteResp from 94cba0f62c7f97f9 at term 3"}
	{"level":"info","ts":"2024-07-19T04:06:33.479218Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"94cba0f62c7f97f9 became leader at term 3"}
	{"level":"info","ts":"2024-07-19T04:06:33.479267Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 94cba0f62c7f97f9 elected leader 94cba0f62c7f97f9 at term 3"}
	{"level":"info","ts":"2024-07-19T04:06:33.48061Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T04:06:33.480663Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"94cba0f62c7f97f9","local-member-attributes":"{Name:multinode-127000 ClientURLs:[https://192.169.0.17:2379]}","request-path":"/0/members/94cba0f62c7f97f9/attributes","cluster-id":"894e36b18b81a269","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-19T04:06:33.480687Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-19T04:06:33.480892Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-19T04:06:33.481851Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-19T04:06:33.4837Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.17:2379"}
	{"level":"info","ts":"2024-07-19T04:06:33.48372Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-07-19T04:08:59.61261Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-07-19T04:08:59.612649Z","caller":"embed/etcd.go:375","msg":"closing etcd server","name":"multinode-127000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.17:2380"],"advertise-client-urls":["https://192.169.0.17:2379"]}
	{"level":"warn","ts":"2024-07-19T04:08:59.61274Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.17:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T04:08:59.61276Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.17:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T04:08:59.612801Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-19T04:08:59.612843Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-19T04:08:59.631967Z","caller":"etcdserver/server.go:1471","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"94cba0f62c7f97f9","current-leader-member-id":"94cba0f62c7f97f9"}
	{"level":"info","ts":"2024-07-19T04:08:59.633896Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.17:2380"}
	{"level":"info","ts":"2024-07-19T04:08:59.634029Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.17:2380"}
	{"level":"info","ts":"2024-07-19T04:08:59.634038Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"multinode-127000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.17:2380"],"advertise-client-urls":["https://192.169.0.17:2379"]}
	
	
	==> kernel <==
	 04:12:13 up 3 min,  0 users,  load average: 0.22, 0.20, 0.08
	Linux multinode-127000 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [378a349103bf] <==
	I0719 04:11:09.583993       1 main.go:326] Node multinode-127000-m02 has CIDR [10.244.1.0/24] 
	I0719 04:11:19.589124       1 main.go:299] Handling node with IPs: map[192.169.0.17:{}]
	I0719 04:11:19.589165       1 main.go:303] handling current node
	I0719 04:11:19.589176       1 main.go:299] Handling node with IPs: map[192.169.0.18:{}]
	I0719 04:11:19.589180       1 main.go:326] Node multinode-127000-m02 has CIDR [10.244.1.0/24] 
	I0719 04:11:29.588873       1 main.go:299] Handling node with IPs: map[192.169.0.17:{}]
	I0719 04:11:29.589004       1 main.go:303] handling current node
	I0719 04:11:29.589045       1 main.go:299] Handling node with IPs: map[192.169.0.18:{}]
	I0719 04:11:29.589071       1 main.go:326] Node multinode-127000-m02 has CIDR [10.244.1.0/24] 
	I0719 04:11:39.587235       1 main.go:299] Handling node with IPs: map[192.169.0.17:{}]
	I0719 04:11:39.587254       1 main.go:303] handling current node
	I0719 04:11:39.587263       1 main.go:299] Handling node with IPs: map[192.169.0.18:{}]
	I0719 04:11:39.587267       1 main.go:326] Node multinode-127000-m02 has CIDR [10.244.1.0/24] 
	I0719 04:11:49.589547       1 main.go:299] Handling node with IPs: map[192.169.0.17:{}]
	I0719 04:11:49.589653       1 main.go:303] handling current node
	I0719 04:11:49.589672       1 main.go:299] Handling node with IPs: map[192.169.0.18:{}]
	I0719 04:11:49.589764       1 main.go:326] Node multinode-127000-m02 has CIDR [10.244.1.0/24] 
	I0719 04:11:59.580541       1 main.go:299] Handling node with IPs: map[192.169.0.17:{}]
	I0719 04:11:59.580572       1 main.go:303] handling current node
	I0719 04:11:59.580588       1 main.go:299] Handling node with IPs: map[192.169.0.18:{}]
	I0719 04:11:59.580595       1 main.go:326] Node multinode-127000-m02 has CIDR [10.244.1.0/24] 
	I0719 04:12:09.581658       1 main.go:299] Handling node with IPs: map[192.169.0.17:{}]
	I0719 04:12:09.581844       1 main.go:303] handling current node
	I0719 04:12:09.582197       1 main.go:299] Handling node with IPs: map[192.169.0.18:{}]
	I0719 04:12:09.582468       1 main.go:326] Node multinode-127000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [d4dc52db5a77] <==
	I0719 04:08:16.466822       1 main.go:299] Handling node with IPs: map[192.169.0.19:{}]
	I0719 04:08:16.466880       1 main.go:326] Node multinode-127000-m03 has CIDR [10.244.3.0/24] 
	I0719 04:08:26.459014       1 main.go:299] Handling node with IPs: map[192.169.0.17:{}]
	I0719 04:08:26.459212       1 main.go:303] handling current node
	I0719 04:08:26.459264       1 main.go:299] Handling node with IPs: map[192.169.0.18:{}]
	I0719 04:08:26.459280       1 main.go:326] Node multinode-127000-m02 has CIDR [10.244.1.0/24] 
	I0719 04:08:26.459395       1 main.go:299] Handling node with IPs: map[192.169.0.19:{}]
	I0719 04:08:26.459805       1 main.go:326] Node multinode-127000-m03 has CIDR [10.244.3.0/24] 
	I0719 04:08:36.457258       1 main.go:299] Handling node with IPs: map[192.169.0.19:{}]
	I0719 04:08:36.457725       1 main.go:326] Node multinode-127000-m03 has CIDR [10.244.2.0/24] 
	I0719 04:08:36.458247       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.169.0.19 Flags: [] Table: 0} 
	I0719 04:08:36.458733       1 main.go:299] Handling node with IPs: map[192.169.0.17:{}]
	I0719 04:08:36.458956       1 main.go:303] handling current node
	I0719 04:08:36.459078       1 main.go:299] Handling node with IPs: map[192.169.0.18:{}]
	I0719 04:08:36.459248       1 main.go:326] Node multinode-127000-m02 has CIDR [10.244.1.0/24] 
	I0719 04:08:46.457818       1 main.go:299] Handling node with IPs: map[192.169.0.17:{}]
	I0719 04:08:46.457949       1 main.go:303] handling current node
	I0719 04:08:46.457988       1 main.go:299] Handling node with IPs: map[192.169.0.18:{}]
	I0719 04:08:46.458013       1 main.go:326] Node multinode-127000-m02 has CIDR [10.244.1.0/24] 
	I0719 04:08:46.458309       1 main.go:299] Handling node with IPs: map[192.169.0.19:{}]
	I0719 04:08:46.458609       1 main.go:326] Node multinode-127000-m03 has CIDR [10.244.2.0/24] 
	I0719 04:08:56.457238       1 main.go:299] Handling node with IPs: map[192.169.0.17:{}]
	I0719 04:08:56.457653       1 main.go:303] handling current node
	I0719 04:08:56.457754       1 main.go:299] Handling node with IPs: map[192.169.0.18:{}]
	I0719 04:08:56.457786       1 main.go:326] Node multinode-127000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [35aa60e7a3f8] <==
	W0719 04:08:59.623563       1 logging.go:59] [core] [Channel #106 SubChannel #107] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:08:59.623583       1 logging.go:59] [core] [Channel #166 SubChannel #167] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:08:59.623602       1 logging.go:59] [core] [Channel #127 SubChannel #128] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:08:59.623622       1 logging.go:59] [core] [Channel #181 SubChannel #182] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:08:59.623642       1 logging.go:59] [core] [Channel #133 SubChannel #134] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:08:59.623663       1 logging.go:59] [core] [Channel #130 SubChannel #131] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:08:59.623683       1 logging.go:59] [core] [Channel #15 SubChannel #16] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:08:59.623702       1 logging.go:59] [core] [Channel #55 SubChannel #56] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:08:59.623721       1 logging.go:59] [core] [Channel #175 SubChannel #176] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:08:59.623741       1 logging.go:59] [core] [Channel #94 SubChannel #95] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:08:59.623761       1 logging.go:59] [core] [Channel #85 SubChannel #86] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:08:59.623787       1 logging.go:59] [core] [Channel #91 SubChannel #92] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:08:59.623811       1 logging.go:59] [core] [Channel #97 SubChannel #98] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:08:59.623836       1 logging.go:59] [core] [Channel #52 SubChannel #53] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:08:59.623858       1 logging.go:59] [core] [Channel #163 SubChannel #164] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:08:59.623881       1 logging.go:59] [core] [Channel #154 SubChannel #155] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:08:59.623904       1 logging.go:59] [core] [Channel #73 SubChannel #74] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:08:59.628369       1 logging.go:59] [core] [Channel #43 SubChannel #44] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:08:59.628470       1 logging.go:59] [core] [Channel #151 SubChannel #152] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:08:59.628516       1 logging.go:59] [core] [Channel #100 SubChannel #101] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:08:59.628544       1 logging.go:59] [core] [Channel #142 SubChannel #143] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:08:59.628566       1 logging.go:59] [core] [Channel #160 SubChannel #161] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:08:59.628774       1 logging.go:59] [core] [Channel #58 SubChannel #59] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:08:59.628820       1 logging.go:59] [core] [Channel #88 SubChannel #89] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0719 04:08:59.628865       1 logging.go:59] [core] [Channel #136 SubChannel #137] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [7ffc364d4028] <==
	I0719 04:09:56.962544       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0719 04:09:56.962665       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0719 04:09:56.962797       1 shared_informer.go:320] Caches are synced for configmaps
	I0719 04:09:56.962908       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0719 04:09:56.962985       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0719 04:09:56.963348       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0719 04:09:56.967161       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	E0719 04:09:56.968874       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0719 04:09:56.970180       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0719 04:09:56.970215       1 aggregator.go:165] initial CRD sync complete...
	I0719 04:09:56.970221       1 autoregister_controller.go:141] Starting autoregister controller
	I0719 04:09:56.970225       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0719 04:09:56.970228       1 cache.go:39] Caches are synced for autoregister controller
	I0719 04:09:56.972230       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0719 04:09:56.974550       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0719 04:09:56.974665       1 policy_source.go:224] refreshing policies
	I0719 04:09:57.039225       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0719 04:09:57.866799       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0719 04:09:58.824807       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0719 04:09:58.943983       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0719 04:09:58.984805       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0719 04:09:59.049425       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0719 04:09:59.057730       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0719 04:10:09.729257       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0719 04:10:09.764930       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [08265d497be9] <==
	I0719 04:10:09.782889       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="56.31µs"
	I0719 04:10:09.807890       1 shared_informer.go:320] Caches are synced for stateful set
	I0719 04:10:09.854021       1 shared_informer.go:320] Caches are synced for attach detach
	I0719 04:10:09.910446       1 shared_informer.go:320] Caches are synced for disruption
	I0719 04:10:09.934346       1 shared_informer.go:320] Caches are synced for cronjob
	I0719 04:10:09.950083       1 shared_informer.go:320] Caches are synced for deployment
	I0719 04:10:09.984798       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 04:10:10.007332       1 shared_informer.go:320] Caches are synced for TTL after finished
	I0719 04:10:10.015365       1 shared_informer.go:320] Caches are synced for resource quota
	I0719 04:10:10.017332       1 shared_informer.go:320] Caches are synced for job
	I0719 04:10:10.397593       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 04:10:10.431239       1 shared_informer.go:320] Caches are synced for garbage collector
	I0719 04:10:10.431274       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0719 04:10:17.609457       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-127000-m02"
	I0719 04:10:30.258965       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="32.269µs"
	I0719 04:10:30.287229       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="7.838891ms"
	I0719 04:10:30.288725       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="34.05µs"
	I0719 04:10:30.292588       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.313459ms"
	I0719 04:10:30.292724       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="24.73µs"
	I0719 04:10:49.761753       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="4.265098ms"
	I0719 04:10:49.762139       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="95.346µs"
	I0719 04:10:49.779904       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-28cb8"
	I0719 04:10:49.791669       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-28cb8"
	I0719 04:10:49.791739       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-8nvff"
	I0719 04:10:49.805280       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-8nvff"
	
	
	==> kube-controller-manager [f8a7e04c5c8e] <==
	I0719 04:07:27.399477       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-127000-m02"
	I0719 04:07:27.453665       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.991496ms"
	I0719 04:07:27.453736       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.903µs"
	I0719 04:07:27.555907       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="11.582132ms"
	I0719 04:07:27.562263       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.301309ms"
	I0719 04:07:27.569880       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="7.261819ms"
	I0719 04:07:27.569945       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.369µs"
	I0719 04:07:31.652395       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-127000-m02\" does not exist"
	I0719 04:07:31.665445       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-127000-m02" podCIDRs=["10.244.1.0/24"]
	I0719 04:07:32.540298       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.335µs"
	I0719 04:07:46.816030       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-127000-m02"
	I0719 04:07:46.829302       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.191µs"
	I0719 04:07:56.567846       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.455µs"
	I0719 04:07:56.572319       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="30.627µs"
	I0719 04:07:56.581994       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.693µs"
	I0719 04:07:56.715535       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="33.4µs"
	I0719 04:07:56.717663       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="23.61µs"
	I0719 04:07:57.724628       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.668004ms"
	I0719 04:07:57.725138       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="55.979µs"
	I0719 04:08:31.221995       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-127000-m02"
	I0719 04:08:32.166309       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-127000-m03\" does not exist"
	I0719 04:08:32.167588       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-127000-m02"
	I0719 04:08:32.179995       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-127000-m03" podCIDRs=["10.244.2.0/24"]
	I0719 04:08:45.279981       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-127000-m02"
	I0719 04:08:48.314710       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-127000-m02"
	
	
	==> kube-proxy [a0862f7c58ff] <==
	I0719 04:09:58.366350       1 server_linux.go:69] "Using iptables proxy"
	I0719 04:09:58.398382       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.17"]
	I0719 04:09:58.526444       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 04:09:58.526658       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 04:09:58.526848       1 server_linux.go:165] "Using iptables Proxier"
	I0719 04:09:58.533849       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 04:09:58.534310       1 server.go:872] "Version info" version="v1.30.3"
	I0719 04:09:58.534341       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 04:09:58.537876       1 config.go:192] "Starting service config controller"
	I0719 04:09:58.537936       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 04:09:58.537992       1 config.go:101] "Starting endpoint slice config controller"
	I0719 04:09:58.537998       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 04:09:58.544224       1 config.go:319] "Starting node config controller"
	I0719 04:09:58.544234       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 04:09:58.638581       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 04:09:58.638652       1 shared_informer.go:320] Caches are synced for service config
	I0719 04:09:58.644352       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [e12c9aa28fc6] <==
	I0719 04:06:35.249660       1 server_linux.go:69] "Using iptables proxy"
	I0719 04:06:35.270815       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.17"]
	I0719 04:06:35.319851       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0719 04:06:35.319894       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0719 04:06:35.319908       1 server_linux.go:165] "Using iptables Proxier"
	I0719 04:06:35.322940       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0719 04:06:35.323135       1 server.go:872] "Version info" version="v1.30.3"
	I0719 04:06:35.323144       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 04:06:35.324892       1 config.go:192] "Starting service config controller"
	I0719 04:06:35.325203       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0719 04:06:35.325266       1 config.go:101] "Starting endpoint slice config controller"
	I0719 04:06:35.325271       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0719 04:06:35.325922       1 config.go:319] "Starting node config controller"
	I0719 04:06:35.325947       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0719 04:06:35.425745       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0719 04:06:35.425869       1 shared_informer.go:320] Caches are synced for service config
	I0719 04:06:35.426132       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [26653cd0d581] <==
	I0719 04:06:32.049097       1 serving.go:380] Generated self-signed cert in-memory
	W0719 04:06:34.379612       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 04:06:34.379648       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 04:06:34.379657       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 04:06:34.379843       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 04:06:34.412788       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0719 04:06:34.412982       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 04:06:34.415028       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 04:06:34.415247       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 04:06:34.415325       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 04:06:34.415339       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 04:06:34.515774       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 04:08:59.668797       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0719 04:08:59.668895       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	E0719 04:08:59.669059       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [a27425c0a86d] <==
	I0719 04:09:53.503608       1 serving.go:380] Generated self-signed cert in-memory
	W0719 04:09:56.898509       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0719 04:09:56.898546       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0719 04:09:56.898554       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0719 04:09:56.898558       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0719 04:09:56.943001       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.30.3"
	I0719 04:09:56.943037       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0719 04:09:56.948696       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0719 04:09:56.948806       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0719 04:09:56.948861       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0719 04:09:56.948852       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0719 04:09:57.049611       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 19 04:10:11 multinode-127000 kubelet[1355]: E0719 04:10:11.638833    1355 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-zzsc5" podUID="32f91bdf-1574-413d-a130-4d067e648d6b"
	Jul 19 04:10:11 multinode-127000 kubelet[1355]: E0719 04:10:11.683254    1355 kubelet.go:2909] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Jul 19 04:10:13 multinode-127000 kubelet[1355]: E0719 04:10:13.270881    1355 configmap.go:199] Couldn't get configMap kube-system/coredns: object "kube-system"/"coredns" not registered
	Jul 19 04:10:13 multinode-127000 kubelet[1355]: E0719 04:10:13.271228    1355 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/55e9cca6-f3d6-4b2f-a8de-df91db8e186a-config-volume podName:55e9cca6-f3d6-4b2f-a8de-df91db8e186a nodeName:}" failed. No retries permitted until 2024-07-19 04:10:29.271217286 +0000 UTC m=+37.796077295 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "config-volume" (UniqueName: "kubernetes.io/configmap/55e9cca6-f3d6-4b2f-a8de-df91db8e186a-config-volume") pod "coredns-7db6d8ff4d-76x8d" (UID: "55e9cca6-f3d6-4b2f-a8de-df91db8e186a") : object "kube-system"/"coredns" not registered
	Jul 19 04:10:13 multinode-127000 kubelet[1355]: E0719 04:10:13.372060    1355 projected.go:294] Couldn't get configMap default/kube-root-ca.crt: object "default"/"kube-root-ca.crt" not registered
	Jul 19 04:10:13 multinode-127000 kubelet[1355]: E0719 04:10:13.372269    1355 projected.go:200] Error preparing data for projected volume kube-api-access-4jnvm for pod default/busybox-fc5497c4f-zzsc5: object "default"/"kube-root-ca.crt" not registered
	Jul 19 04:10:13 multinode-127000 kubelet[1355]: E0719 04:10:13.372420    1355 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/32f91bdf-1574-413d-a130-4d067e648d6b-kube-api-access-4jnvm podName:32f91bdf-1574-413d-a130-4d067e648d6b nodeName:}" failed. No retries permitted until 2024-07-19 04:10:29.372400308 +0000 UTC m=+37.897260336 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-4jnvm" (UniqueName: "kubernetes.io/projected/32f91bdf-1574-413d-a130-4d067e648d6b-kube-api-access-4jnvm") pod "busybox-fc5497c4f-zzsc5" (UID: "32f91bdf-1574-413d-a130-4d067e648d6b") : object "default"/"kube-root-ca.crt" not registered
	Jul 19 04:10:13 multinode-127000 kubelet[1355]: E0719 04:10:13.638233    1355 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-76x8d" podUID="55e9cca6-f3d6-4b2f-a8de-df91db8e186a"
	Jul 19 04:10:13 multinode-127000 kubelet[1355]: E0719 04:10:13.638950    1355 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-zzsc5" podUID="32f91bdf-1574-413d-a130-4d067e648d6b"
	Jul 19 04:10:15 multinode-127000 kubelet[1355]: E0719 04:10:15.637773    1355 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-fc5497c4f-zzsc5" podUID="32f91bdf-1574-413d-a130-4d067e648d6b"
	Jul 19 04:10:15 multinode-127000 kubelet[1355]: E0719 04:10:15.638285    1355 pod_workers.go:1298] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-7db6d8ff4d-76x8d" podUID="55e9cca6-f3d6-4b2f-a8de-df91db8e186a"
	Jul 19 04:10:29 multinode-127000 kubelet[1355]: I0719 04:10:29.232577    1355 scope.go:117] "RemoveContainer" containerID="6396364b3e0e9c88164375097bcb54491bdfb5ae4bf0f3c40756bc7d31795682"
	Jul 19 04:10:29 multinode-127000 kubelet[1355]: I0719 04:10:29.232761    1355 scope.go:117] "RemoveContainer" containerID="f2524461cc22c316565bc9886051ccaf67633435391eb3e4d53f94b844be346b"
	Jul 19 04:10:29 multinode-127000 kubelet[1355]: E0719 04:10:29.232883    1355 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(cd072b88-33f2-4988-985a-f1a00f8eb449)\"" pod="kube-system/storage-provisioner" podUID="cd072b88-33f2-4988-985a-f1a00f8eb449"
	Jul 19 04:10:39 multinode-127000 kubelet[1355]: I0719 04:10:39.638603    1355 scope.go:117] "RemoveContainer" containerID="f2524461cc22c316565bc9886051ccaf67633435391eb3e4d53f94b844be346b"
	Jul 19 04:10:51 multinode-127000 kubelet[1355]: E0719 04:10:51.661934    1355 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:10:51 multinode-127000 kubelet[1355]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:10:51 multinode-127000 kubelet[1355]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:10:51 multinode-127000 kubelet[1355]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:10:51 multinode-127000 kubelet[1355]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 19 04:11:51 multinode-127000 kubelet[1355]: E0719 04:11:51.661569    1355 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 19 04:11:51 multinode-127000 kubelet[1355]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 19 04:11:51 multinode-127000 kubelet[1355]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 19 04:11:51 multinode-127000 kubelet[1355]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 19 04:11:51 multinode-127000 kubelet[1355]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p multinode-127000 -n multinode-127000
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-127000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartMultiNode (187.78s)

TestScheduledStopUnix (81.14s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-228000 --memory=2048 --driver=hyperkit 
E0718 21:16:13.848599    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p scheduled-stop-228000 --memory=2048 --driver=hyperkit : exit status 90 (1m15.74833866s)

-- stdout --
	* [scheduled-stop-228000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-228000" primary control-plane node in "scheduled-stop-228000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 04:16:21 scheduled-stop-228000 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:21.104935412Z" level=info msg="Starting up"
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:21.105391564Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:21.105945979Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.122528423Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.137569553Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.137639220Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.137752481Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.137791508Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.137869671Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.137906727Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.138051900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.138091930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.138124105Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.138159281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.138240640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.138412324Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.139976498Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.140029297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.140159662Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.140214777Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.140325803Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.140403245Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.142840043Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.142925286Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.142971637Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143006966Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143040044Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143129152Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143358242Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143473744Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143512998Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143544814Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143578981Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143611197Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143641132Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143671749Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143703319Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143739553Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143773151Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143842433Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143884464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143917939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143954012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143990202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144023446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144055012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144086952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144117095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144147987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144179305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144208397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144239565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144269399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144301462Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144340055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144378231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144411009Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144484040Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144526997Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144558202Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144588745Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144618311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144647700Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144676270Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144873369Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144963732Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.145024628Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.145060770Z" level=info msg="containerd successfully booted in 0.023230s"
	Jul 19 04:16:22 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:22.127218883Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 04:16:22 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:22.136047705Z" level=info msg="Loading containers: start."
	Jul 19 04:16:22 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:22.225652379Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 04:16:22 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:22.305521746Z" level=info msg="Loading containers: done."
	Jul 19 04:16:22 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:22.318199010Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 04:16:22 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:22.318290507Z" level=info msg="Daemon has completed initialization"
	Jul 19 04:16:22 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:22.344907665Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 04:16:22 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:22.345002263Z" level=info msg="API listen on [::]:2376"
	Jul 19 04:16:22 scheduled-stop-228000 systemd[1]: Started Docker Application Container Engine.
	Jul 19 04:16:23 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:23.308862328Z" level=info msg="Processing signal 'terminated'"
	Jul 19 04:16:23 scheduled-stop-228000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 04:16:23 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:23.310251397Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 04:16:23 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:23.310349647Z" level=info msg="Daemon shutdown complete"
	Jul 19 04:16:23 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:23.310444348Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 04:16:23 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:23.310487929Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 04:16:24 scheduled-stop-228000 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 04:16:24 scheduled-stop-228000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:16:24 scheduled-stop-228000 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:16:24 scheduled-stop-228000 dockerd[915]: time="2024-07-19T04:16:24.357941173Z" level=info msg="Starting up"
	Jul 19 04:17:24 scheduled-stop-228000 dockerd[915]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:17:24 scheduled-stop-228000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:17:24 scheduled-stop-228000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:17:24 scheduled-stop-228000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
scheduled_stop_test.go:130: starting minikube: exit status 90

-- stdout --
	* [scheduled-stop-228000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "scheduled-stop-228000" primary control-plane node in "scheduled-stop-228000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 04:16:21 scheduled-stop-228000 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:21.104935412Z" level=info msg="Starting up"
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:21.105391564Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:21.105945979Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=519
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.122528423Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.137569553Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.137639220Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.137752481Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.137791508Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.137869671Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.137906727Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.138051900Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.138091930Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.138124105Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.138159281Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.138240640Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.138412324Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.139976498Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.140029297Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.140159662Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.140214777Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.140325803Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.140403245Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.142840043Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.142925286Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.142971637Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143006966Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143040044Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143129152Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143358242Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143473744Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143512998Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143544814Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143578981Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143611197Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143641132Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143671749Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143703319Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143739553Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143773151Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143842433Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143884464Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143917939Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143954012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.143990202Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144023446Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144055012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144086952Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144117095Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144147987Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144179305Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144208397Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144239565Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144269399Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144301462Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144340055Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144378231Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144411009Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144484040Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144526997Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144558202Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144588745Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144618311Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144647700Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144676270Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144873369Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.144963732Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.145024628Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 04:16:21 scheduled-stop-228000 dockerd[519]: time="2024-07-19T04:16:21.145060770Z" level=info msg="containerd successfully booted in 0.023230s"
	Jul 19 04:16:22 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:22.127218883Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 04:16:22 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:22.136047705Z" level=info msg="Loading containers: start."
	Jul 19 04:16:22 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:22.225652379Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 04:16:22 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:22.305521746Z" level=info msg="Loading containers: done."
	Jul 19 04:16:22 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:22.318199010Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 04:16:22 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:22.318290507Z" level=info msg="Daemon has completed initialization"
	Jul 19 04:16:22 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:22.344907665Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 04:16:22 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:22.345002263Z" level=info msg="API listen on [::]:2376"
	Jul 19 04:16:22 scheduled-stop-228000 systemd[1]: Started Docker Application Container Engine.
	Jul 19 04:16:23 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:23.308862328Z" level=info msg="Processing signal 'terminated'"
	Jul 19 04:16:23 scheduled-stop-228000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 04:16:23 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:23.310251397Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 04:16:23 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:23.310349647Z" level=info msg="Daemon shutdown complete"
	Jul 19 04:16:23 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:23.310444348Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 04:16:23 scheduled-stop-228000 dockerd[512]: time="2024-07-19T04:16:23.310487929Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 04:16:24 scheduled-stop-228000 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 04:16:24 scheduled-stop-228000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:16:24 scheduled-stop-228000 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:16:24 scheduled-stop-228000 dockerd[915]: time="2024-07-19T04:16:24.357941173Z" level=info msg="Starting up"
	Jul 19 04:17:24 scheduled-stop-228000 dockerd[915]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:17:24 scheduled-stop-228000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:17:24 scheduled-stop-228000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:17:24 scheduled-stop-228000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
panic.go:626: *** TestScheduledStopUnix FAILED at 2024-07-18 21:17:24.48023 -0700 PDT m=+3137.397615060
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-228000 -n scheduled-stop-228000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-228000 -n scheduled-stop-228000: exit status 6 (154.520468ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0718 21:17:24.622254    5679 status.go:417] kubeconfig endpoint: get endpoint: "scheduled-stop-228000" does not appear in /Users/jenkins/minikube-integration/19302-1411/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "scheduled-stop-228000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "scheduled-stop-228000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-228000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p scheduled-stop-228000: (5.240220099s)
--- FAIL: TestScheduledStopUnix (81.14s)

TestNoKubernetes/serial/StartNoArgs (78.28s)

=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-347000 --driver=hyperkit 
no_kubernetes_test.go:191: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-347000 --driver=hyperkit : exit status 90 (1m18.129868437s)

-- stdout --
	* [NoKubernetes-347000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting minikube without Kubernetes in cluster NoKubernetes-347000
	* Restarting existing hyperkit VM for "NoKubernetes-347000" ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 04:28:55 NoKubernetes-347000 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[504]: time="2024-07-19T04:28:55.413847896Z" level=info msg="Starting up"
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[504]: time="2024-07-19T04:28:55.414359723Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[504]: time="2024-07-19T04:28:55.416156935Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=511
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.431364700Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.447012056Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.447112240Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.447203947Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.447250699Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.447393652Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.447436560Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.447580256Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.447620072Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.447653061Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.447685815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.447806168Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.448046466Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.449687257Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.449736006Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.449871691Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.449914660Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.450023281Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.450081000Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.450482251Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.450588628Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.450636240Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.450733812Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.450776263Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.450840797Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.451065957Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.451179756Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.451217940Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.451252693Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.451287680Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.451324657Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.451357243Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.451386944Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.451416925Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.451446390Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.451475151Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.451505233Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.451541644Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.451573582Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.451603309Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.451638215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.451668200Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.451697296Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.451725490Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.451753712Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.451782374Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.451814346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.451849193Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.451881412Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.451955043Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.452002254Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.452038715Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.452080164Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.452112996Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.452179561Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.452222835Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.452318558Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.452361174Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.452390017Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.452418763Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.452453128Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.452694667Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.452780650Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.452838989Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 04:28:55 NoKubernetes-347000 dockerd[511]: time="2024-07-19T04:28:55.452873542Z" level=info msg="containerd successfully booted in 0.022300s"
	Jul 19 04:28:56 NoKubernetes-347000 dockerd[504]: time="2024-07-19T04:28:56.461975431Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 04:28:56 NoKubernetes-347000 dockerd[504]: time="2024-07-19T04:28:56.472873137Z" level=info msg="Loading containers: start."
	Jul 19 04:28:56 NoKubernetes-347000 dockerd[504]: time="2024-07-19T04:28:56.566648155Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 04:28:56 NoKubernetes-347000 dockerd[504]: time="2024-07-19T04:28:56.637518024Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 19 04:28:56 NoKubernetes-347000 dockerd[504]: time="2024-07-19T04:28:56.688210786Z" level=info msg="Loading containers: done."
	Jul 19 04:28:56 NoKubernetes-347000 dockerd[504]: time="2024-07-19T04:28:56.698784977Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 04:28:56 NoKubernetes-347000 dockerd[504]: time="2024-07-19T04:28:56.698956766Z" level=info msg="Daemon has completed initialization"
	Jul 19 04:28:56 NoKubernetes-347000 dockerd[504]: time="2024-07-19T04:28:56.721844528Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 04:28:56 NoKubernetes-347000 systemd[1]: Started Docker Application Container Engine.
	Jul 19 04:28:56 NoKubernetes-347000 dockerd[504]: time="2024-07-19T04:28:56.722042752Z" level=info msg="API listen on [::]:2376"
	Jul 19 04:28:57 NoKubernetes-347000 dockerd[504]: time="2024-07-19T04:28:57.707525196Z" level=info msg="Processing signal 'terminated'"
	Jul 19 04:28:57 NoKubernetes-347000 dockerd[504]: time="2024-07-19T04:28:57.708475185Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 04:28:57 NoKubernetes-347000 dockerd[504]: time="2024-07-19T04:28:57.708529777Z" level=info msg="Daemon shutdown complete"
	Jul 19 04:28:57 NoKubernetes-347000 dockerd[504]: time="2024-07-19T04:28:57.708562550Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 04:28:57 NoKubernetes-347000 dockerd[504]: time="2024-07-19T04:28:57.708575594Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 04:28:57 NoKubernetes-347000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 04:28:58 NoKubernetes-347000 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 04:28:58 NoKubernetes-347000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:28:58 NoKubernetes-347000 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:28:58 NoKubernetes-347000 dockerd[928]: time="2024-07-19T04:28:58.739229404Z" level=info msg="Starting up"
	Jul 19 04:29:58 NoKubernetes-347000 dockerd[928]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:29:58 NoKubernetes-347000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:29:58 NoKubernetes-347000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:29:58 NoKubernetes-347000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
no_kubernetes_test.go:193: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-347000 --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-347000 -n NoKubernetes-347000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-347000 -n NoKubernetes-347000: exit status 6 (152.027656ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0718 21:29:58.988479    6838 status.go:417] kubeconfig endpoint: get endpoint: "NoKubernetes-347000" does not appear in /Users/jenkins/minikube-integration/19302-1411/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "NoKubernetes-347000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestNoKubernetes/serial/StartNoArgs (78.28s)
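In this failure the first dockerd start at 04:28:55 succeeds, but the restart at 04:28:58 hangs for 60s dialing /run/containerd/containerd.sock and exits, so docker.service fails and minikube aborts with RUNTIME_ENABLE. When reproducing locally, a first step is to run the diagnostics the error message itself points at, inside the guest; a hedged sketch (profile name from the log above, not a verified fix):

```shell
# SSH into the guest VM that minikube provisioned.
minikube ssh -p NoKubernetes-347000

# Inside the VM: dockerd's "failed to dial /run/containerd/containerd.sock"
# suggests containerd itself never came back up after the restart.
sudo systemctl status containerd docker

# The two commands the error message recommends:
sudo systemctl status docker.service
sudo journalctl -xeu docker.service

# If containerd is wedged, restarting it before docker can clear the
# context-deadline-exceeded dial error.
sudo systemctl restart containerd && sudo systemctl restart docker
```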

TestNetworkPlugins/group/false/Start (75.78s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-709000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit 
net_test.go:112: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p false-709000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit : exit status 90 (1m15.758978886s)

-- stdout --
	* [false-709000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "false-709000" primary control-plane node in "false-709000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	I0718 21:31:35.245415    7353 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:31:35.245737    7353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:31:35.245743    7353 out.go:304] Setting ErrFile to fd 2...
	I0718 21:31:35.245747    7353 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:31:35.245947    7353 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1411/.minikube/bin
	I0718 21:31:35.247787    7353 out.go:298] Setting JSON to false
	I0718 21:31:35.272337    7353 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":5469,"bootTime":1721358026,"procs":521,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0718 21:31:35.272446    7353 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 21:31:35.348193    7353 out.go:177] * [false-709000] minikube v1.33.1 on Darwin 14.5
	I0718 21:31:35.390771    7353 notify.go:220] Checking for updates...
	I0718 21:31:35.415349    7353 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 21:31:35.477796    7353 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1411/kubeconfig
	I0718 21:31:35.499578    7353 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 21:31:35.522585    7353 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 21:31:35.543606    7353 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1411/.minikube
	I0718 21:31:35.564523    7353 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 21:31:35.586470    7353 config.go:182] Loaded profile config "calico-709000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:31:35.586664    7353 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 21:31:35.616560    7353 out.go:177] * Using the hyperkit driver based on user configuration
	I0718 21:31:35.658517    7353 start.go:297] selected driver: hyperkit
	I0718 21:31:35.658542    7353 start.go:901] validating driver "hyperkit" against <nil>
	I0718 21:31:35.658583    7353 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 21:31:35.662954    7353 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:31:35.663112    7353 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19302-1411/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0718 21:31:35.671697    7353 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0718 21:31:35.675953    7353 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 21:31:35.676006    7353 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0718 21:31:35.676037    7353 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 21:31:35.676245    7353 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0718 21:31:35.676289    7353 cni.go:84] Creating CNI manager for "false"
	I0718 21:31:35.676347    7353 start.go:340] cluster config:
	{Name:false-709000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:false-709000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime
:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSH
AgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 21:31:35.676456    7353 iso.go:125] acquiring lock: {Name:mka3a56e9fb30ac1fad44235cb5c998fd919cd8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 21:31:35.718441    7353 out.go:177] * Starting "false-709000" primary control-plane node in "false-709000" cluster
	I0718 21:31:35.739514    7353 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 21:31:35.739583    7353 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19302-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0718 21:31:35.739615    7353 cache.go:56] Caching tarball of preloaded images
	I0718 21:31:35.739828    7353 preload.go:172] Found /Users/jenkins/minikube-integration/19302-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0718 21:31:35.739850    7353 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0718 21:31:35.740038    7353 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/false-709000/config.json ...
	I0718 21:31:35.740076    7353 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/false-709000/config.json: {Name:mkb3f13d486cf51287e11cdd1d869b2a87b1dc9c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 21:31:35.740721    7353 start.go:360] acquireMachinesLock for false-709000: {Name:mk8a0ac4b11cd5d9eba5ac8b9ae33317742f9112 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0718 21:31:35.740835    7353 start.go:364] duration metric: took 93.477µs to acquireMachinesLock for "false-709000"
	I0718 21:31:35.740874    7353 start.go:93] Provisioning new machine with config: &{Name:false-709000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:3072 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Ku
bernetesVersion:v1.30.3 ClusterName:false-709000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:false} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0718 21:31:35.740965    7353 start.go:125] createHost starting for "" (driver="hyperkit")
	I0718 21:31:35.783479    7353 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0718 21:31:35.783748    7353 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 21:31:35.783822    7353 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 21:31:35.793909    7353 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55576
	I0718 21:31:35.794280    7353 main.go:141] libmachine: () Calling .GetVersion
	I0718 21:31:35.794703    7353 main.go:141] libmachine: Using API Version  1
	I0718 21:31:35.794719    7353 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 21:31:35.794933    7353 main.go:141] libmachine: () Calling .GetMachineName
	I0718 21:31:35.795056    7353 main.go:141] libmachine: (false-709000) Calling .GetMachineName
	I0718 21:31:35.795155    7353 main.go:141] libmachine: (false-709000) Calling .DriverName
	I0718 21:31:35.795260    7353 start.go:159] libmachine.API.Create for "false-709000" (driver="hyperkit")
	I0718 21:31:35.795284    7353 client.go:168] LocalClient.Create starting
	I0718 21:31:35.795316    7353 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem
	I0718 21:31:35.795367    7353 main.go:141] libmachine: Decoding PEM data...
	I0718 21:31:35.795381    7353 main.go:141] libmachine: Parsing certificate...
	I0718 21:31:35.795430    7353 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/cert.pem
	I0718 21:31:35.795470    7353 main.go:141] libmachine: Decoding PEM data...
	I0718 21:31:35.795483    7353 main.go:141] libmachine: Parsing certificate...
	I0718 21:31:35.795495    7353 main.go:141] libmachine: Running pre-create checks...
	I0718 21:31:35.795500    7353 main.go:141] libmachine: (false-709000) Calling .PreCreateCheck
	I0718 21:31:35.795586    7353 main.go:141] libmachine: (false-709000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 21:31:35.795809    7353 main.go:141] libmachine: (false-709000) Calling .GetConfigRaw
	I0718 21:31:35.796277    7353 main.go:141] libmachine: Creating machine...
	I0718 21:31:35.796286    7353 main.go:141] libmachine: (false-709000) Calling .Create
	I0718 21:31:35.796352    7353 main.go:141] libmachine: (false-709000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 21:31:35.796468    7353 main.go:141] libmachine: (false-709000) DBG | I0718 21:31:35.796347    7363 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19302-1411/.minikube
	I0718 21:31:35.796526    7353 main.go:141] libmachine: (false-709000) Downloading /Users/jenkins/minikube-integration/19302-1411/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19302-1411/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0718 21:31:35.975549    7353 main.go:141] libmachine: (false-709000) DBG | I0718 21:31:35.975485    7363 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/id_rsa...
	I0718 21:31:36.145580    7353 main.go:141] libmachine: (false-709000) DBG | I0718 21:31:36.145492    7363 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/false-709000.rawdisk...
	I0718 21:31:36.145595    7353 main.go:141] libmachine: (false-709000) DBG | Writing magic tar header
	I0718 21:31:36.145603    7353 main.go:141] libmachine: (false-709000) DBG | Writing SSH key tar header
	I0718 21:31:36.146351    7353 main.go:141] libmachine: (false-709000) DBG | I0718 21:31:36.146265    7363 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000 ...
	I0718 21:31:36.505108    7353 main.go:141] libmachine: (false-709000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 21:31:36.505128    7353 main.go:141] libmachine: (false-709000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/hyperkit.pid
	I0718 21:31:36.505154    7353 main.go:141] libmachine: (false-709000) DBG | Using UUID 9f9675a6-2456-4af3-8b01-3e7edb4067c9
	I0718 21:31:36.535303    7353 main.go:141] libmachine: (false-709000) DBG | Generated MAC be:8:7d:e5:59:9a
	I0718 21:31:36.535320    7353 main.go:141] libmachine: (false-709000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=false-709000
	I0718 21:31:36.535363    7353 main.go:141] libmachine: (false-709000) DBG | 2024/07/18 21:31:36 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9f9675a6-2456-4af3-8b01-3e7edb4067c9", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/initrd", Bootrom:"", CPUs:2, Memory:3072, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0718 21:31:36.535398    7353 main.go:141] libmachine: (false-709000) DBG | 2024/07/18 21:31:36 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9f9675a6-2456-4af3-8b01-3e7edb4067c9", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/initrd", Bootrom:"", CPUs:2, Memory:3072, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0718 21:31:36.535437    7353 main.go:141] libmachine: (false-709000) DBG | 2024/07/18 21:31:36 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/hyperkit.pid", "-c", "2", "-m", "3072M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9f9675a6-2456-4af3-8b01-3e7edb4067c9", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/false-709000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/tty,log=/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/bzimage,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=false-709000"}
	I0718 21:31:36.535467    7353 main.go:141] libmachine: (false-709000) DBG | 2024/07/18 21:31:36 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/hyperkit.pid -c 2 -m 3072M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9f9675a6-2456-4af3-8b01-3e7edb4067c9 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/false-709000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/tty,log=/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/console-ring -f kexec,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/bzimage,/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=false-709000"
	I0718 21:31:36.535481    7353 main.go:141] libmachine: (false-709000) DBG | 2024/07/18 21:31:36 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0718 21:31:36.538286    7353 main.go:141] libmachine: (false-709000) DBG | 2024/07/18 21:31:36 DEBUG: hyperkit: Pid is 7364
	I0718 21:31:36.538740    7353 main.go:141] libmachine: (false-709000) DBG | Attempt 0
	I0718 21:31:36.538753    7353 main.go:141] libmachine: (false-709000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 21:31:36.538908    7353 main.go:141] libmachine: (false-709000) DBG | hyperkit pid from json: 7364
	I0718 21:31:36.539855    7353 main.go:141] libmachine: (false-709000) DBG | Searching for be:8:7d:e5:59:9a in /var/db/dhcpd_leases ...
	I0718 21:31:36.539972    7353 main.go:141] libmachine: (false-709000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0718 21:31:36.539993    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:76:69:1e:0:fd:8c ID:1,76:69:1e:0:fd:8c Lease:0x669b3d56}
	I0718 21:31:36.540010    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:8a:78:d2:74:27:42 ID:1,8a:78:d2:74:27:42 Lease:0x669b3d2e}
	I0718 21:31:36.540021    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:da:ad:aa:85:e4:ef ID:1,da:ad:aa:85:e4:ef Lease:0x669b3d03}
	I0718 21:31:36.540032    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:8a:4c:82:b5:b9:fe ID:1,8a:4c:82:b5:b9:fe Lease:0x669b3cd2}
	I0718 21:31:36.540043    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:72:fb:d0:92:80:5e ID:1,72:fb:d0:92:80:5e Lease:0x6699eb5f}
	I0718 21:31:36.540054    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:42:4f:ae:89:6:58 ID:1,42:4f:ae:89:6:58 Lease:0x669b3c49}
	I0718 21:31:36.540067    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:2e:a0:3:ba:bd:c9 ID:1,2e:a0:3:ba:bd:c9 Lease:0x669b3c6a}
	I0718 21:31:36.540078    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:86:4b:29:be:5b:d8 ID:1,86:4b:29:be:5b:d8 Lease:0x669b3c08}
	I0718 21:31:36.540089    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:56:b7:46:b5:45:39 ID:1,56:b7:46:b5:45:39 Lease:0x669b3b83}
	I0718 21:31:36.540099    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:52:2f:eb:ef:10:d ID:1,52:2f:eb:ef:10:d Lease:0x669b3b4c}
	I0718 21:31:36.540120    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:3e:79:98:ae:b1:97 ID:1,3e:79:98:ae:b1:97 Lease:0x669b3b3e}
	I0718 21:31:36.540144    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:16:a3:68:1b:ee:2f ID:1,16:a3:68:1b:ee:2f Lease:0x6699e9bf}
	I0718 21:31:36.540166    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:2:aa:9e:1e:9c:88 ID:1,2:aa:9e:1e:9c:88 Lease:0x669b3b12}
	I0718 21:31:36.540182    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:fa:57:50:16:6f:5d ID:1,fa:57:50:16:6f:5d Lease:0x6699e98c}
	I0718 21:31:36.540200    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:1e:af:f7:18:b5:e8 ID:1,1e:af:f7:18:b5:e8 Lease:0x669b3ad5}
	I0718 21:31:36.540215    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:6e:17:17:e8:8d:a7 ID:1,6e:17:17:e8:8d:a7 Lease:0x669b3a67}
	I0718 21:31:36.540231    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:36:c7:27:fa:6a:4e ID:1,36:c7:27:fa:6a:4e Lease:0x669b3a12}
	I0718 21:31:36.540243    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:46:fe:ea:f2:ee:32 ID:1,46:fe:ea:f2:ee:32 Lease:0x669b39e5}
	I0718 21:31:36.540256    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:2:b9:df:ce:6b:51 ID:1,2:b9:df:ce:6b:51 Lease:0x6699e7c9}
	I0718 21:31:36.540265    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:4c:de:4f:d8:27 ID:1,6:4c:de:4f:d8:27 Lease:0x6699e6d1}
	I0718 21:31:36.540276    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:d2:59:42:45:2c ID:1,3a:d2:59:42:45:2c Lease:0x6699e7d7}
	I0718 21:31:36.540296    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:d2:e2:11:67:74:1c ID:1,d2:e2:11:67:74:1c Lease:0x669b386d}
	I0718 21:31:36.540313    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a:aa:dd:3b:99:bf ID:1,a:aa:dd:3b:99:bf Lease:0x6699e547}
	I0718 21:31:36.540328    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:72:c0:82:86:ae:7b ID:1,72:c0:82:86:ae:7b Lease:0x6699e52d}
	I0718 21:31:36.540349    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:66:6c:75:31:3:1a ID:1,66:6c:75:31:3:1a Lease:0x6699e4fd}
	I0718 21:31:36.540362    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:4a:cc:c:f8:b6:77 ID:1,4a:cc:c:f8:b6:77 Lease:0x669b35c6}
	I0718 21:31:36.540375    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:eb:ab:51:e7:9a ID:1,7e:eb:ab:51:e7:9a Lease:0x669b35b5}
	I0718 21:31:36.540383    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:11:11:3f:ac:6d ID:1,a2:11:11:3f:ac:6d Lease:0x669b3576}
	I0718 21:31:36.540395    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ce:10:ef:df:f5:dd ID:1,ce:10:ef:df:f5:dd Lease:0x669b3547}
	I0718 21:31:36.540407    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ce:83:63:32:49:35 ID:1,ce:83:63:32:49:35 Lease:0x669b34e5}
	I0718 21:31:36.540418    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:da:90:24:7c:c3:59 ID:1,da:90:24:7c:c3:59 Lease:0x669b34ca}
	I0718 21:31:36.540486    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:c5:b:2c:73:ff ID:1,9e:c5:b:2c:73:ff Lease:0x6699e275}
	I0718 21:31:36.540516    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:7a:cb:90:8a:a3:a2 ID:1,7a:cb:90:8a:a3:a2 Lease:0x669b34a2}
	I0718 21:31:36.540530    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:32:a2:3f:92:70:d7 ID:1,32:a2:3f:92:70:d7 Lease:0x669b341a}
	I0718 21:31:36.540544    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:ca:79:1e:6d ID:1,e:d3:ca:79:1e:6d Lease:0x669b30df}
	I0718 21:31:36.540562    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:fb:e4:e6:3d:70 ID:1,ca:fb:e4:e6:3d:70 Lease:0x6699debd}
	I0718 21:31:36.540577    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:de:8a:ce:de:23:52 ID:1,de:8a:ce:de:23:52 Lease:0x669b2e44}
	I0718 21:31:36.545824    7353 main.go:141] libmachine: (false-709000) DBG | 2024/07/18 21:31:36 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0718 21:31:36.554123    7353 main.go:141] libmachine: (false-709000) DBG | 2024/07/18 21:31:36 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0718 21:31:36.554970    7353 main.go:141] libmachine: (false-709000) DBG | 2024/07/18 21:31:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0718 21:31:36.555003    7353 main.go:141] libmachine: (false-709000) DBG | 2024/07/18 21:31:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0718 21:31:36.555021    7353 main.go:141] libmachine: (false-709000) DBG | 2024/07/18 21:31:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0718 21:31:36.555034    7353 main.go:141] libmachine: (false-709000) DBG | 2024/07/18 21:31:36 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0718 21:31:36.954573    7353 main.go:141] libmachine: (false-709000) DBG | 2024/07/18 21:31:36 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0718 21:31:36.954588    7353 main.go:141] libmachine: (false-709000) DBG | 2024/07/18 21:31:36 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0718 21:31:37.069011    7353 main.go:141] libmachine: (false-709000) DBG | 2024/07/18 21:31:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0718 21:31:37.069031    7353 main.go:141] libmachine: (false-709000) DBG | 2024/07/18 21:31:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0718 21:31:37.069057    7353 main.go:141] libmachine: (false-709000) DBG | 2024/07/18 21:31:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0718 21:31:37.069067    7353 main.go:141] libmachine: (false-709000) DBG | 2024/07/18 21:31:37 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0718 21:31:37.069892    7353 main.go:141] libmachine: (false-709000) DBG | 2024/07/18 21:31:37 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0718 21:31:37.069901    7353 main.go:141] libmachine: (false-709000) DBG | 2024/07/18 21:31:37 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0718 21:31:38.540946    7353 main.go:141] libmachine: (false-709000) DBG | Attempt 1
	I0718 21:31:38.540962    7353 main.go:141] libmachine: (false-709000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 21:31:38.541033    7353 main.go:141] libmachine: (false-709000) DBG | hyperkit pid from json: 7364
	I0718 21:31:38.541893    7353 main.go:141] libmachine: (false-709000) DBG | Searching for be:8:7d:e5:59:9a in /var/db/dhcpd_leases ...
	I0718 21:31:38.541950    7353 main.go:141] libmachine: (false-709000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0718 21:31:38.541965    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:76:69:1e:0:fd:8c ID:1,76:69:1e:0:fd:8c Lease:0x669b3d56}
	I0718 21:31:38.541989    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:8a:78:d2:74:27:42 ID:1,8a:78:d2:74:27:42 Lease:0x669b3d2e}
	I0718 21:31:38.541998    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:da:ad:aa:85:e4:ef ID:1,da:ad:aa:85:e4:ef Lease:0x669b3d03}
	I0718 21:31:38.542009    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:8a:4c:82:b5:b9:fe ID:1,8a:4c:82:b5:b9:fe Lease:0x669b3cd2}
	I0718 21:31:38.542019    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:72:fb:d0:92:80:5e ID:1,72:fb:d0:92:80:5e Lease:0x6699eb5f}
	I0718 21:31:38.542028    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:42:4f:ae:89:6:58 ID:1,42:4f:ae:89:6:58 Lease:0x669b3c49}
	I0718 21:31:38.542036    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:2e:a0:3:ba:bd:c9 ID:1,2e:a0:3:ba:bd:c9 Lease:0x669b3c6a}
	I0718 21:31:38.542044    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:86:4b:29:be:5b:d8 ID:1,86:4b:29:be:5b:d8 Lease:0x669b3c08}
	I0718 21:31:38.542054    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:56:b7:46:b5:45:39 ID:1,56:b7:46:b5:45:39 Lease:0x669b3b83}
	I0718 21:31:38.542062    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:52:2f:eb:ef:10:d ID:1,52:2f:eb:ef:10:d Lease:0x669b3b4c}
	I0718 21:31:38.542068    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:3e:79:98:ae:b1:97 ID:1,3e:79:98:ae:b1:97 Lease:0x669b3b3e}
	I0718 21:31:38.542077    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:16:a3:68:1b:ee:2f ID:1,16:a3:68:1b:ee:2f Lease:0x6699e9bf}
	I0718 21:31:38.542082    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:2:aa:9e:1e:9c:88 ID:1,2:aa:9e:1e:9c:88 Lease:0x669b3b12}
	I0718 21:31:38.542096    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:fa:57:50:16:6f:5d ID:1,fa:57:50:16:6f:5d Lease:0x6699e98c}
	I0718 21:31:38.542104    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:1e:af:f7:18:b5:e8 ID:1,1e:af:f7:18:b5:e8 Lease:0x669b3ad5}
	I0718 21:31:38.542111    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:6e:17:17:e8:8d:a7 ID:1,6e:17:17:e8:8d:a7 Lease:0x669b3a67}
	I0718 21:31:38.542116    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:36:c7:27:fa:6a:4e ID:1,36:c7:27:fa:6a:4e Lease:0x669b3a12}
	I0718 21:31:38.542130    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:46:fe:ea:f2:ee:32 ID:1,46:fe:ea:f2:ee:32 Lease:0x669b39e5}
	I0718 21:31:38.542138    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:2:b9:df:ce:6b:51 ID:1,2:b9:df:ce:6b:51 Lease:0x6699e7c9}
	I0718 21:31:38.542149    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:4c:de:4f:d8:27 ID:1,6:4c:de:4f:d8:27 Lease:0x6699e6d1}
	I0718 21:31:38.542157    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:d2:59:42:45:2c ID:1,3a:d2:59:42:45:2c Lease:0x6699e7d7}
	I0718 21:31:38.542164    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:d2:e2:11:67:74:1c ID:1,d2:e2:11:67:74:1c Lease:0x669b386d}
	I0718 21:31:38.542173    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a:aa:dd:3b:99:bf ID:1,a:aa:dd:3b:99:bf Lease:0x6699e547}
	I0718 21:31:38.542189    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:72:c0:82:86:ae:7b ID:1,72:c0:82:86:ae:7b Lease:0x6699e52d}
	I0718 21:31:38.542201    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:66:6c:75:31:3:1a ID:1,66:6c:75:31:3:1a Lease:0x6699e4fd}
	I0718 21:31:38.542209    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:4a:cc:c:f8:b6:77 ID:1,4a:cc:c:f8:b6:77 Lease:0x669b35c6}
	I0718 21:31:38.542216    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:eb:ab:51:e7:9a ID:1,7e:eb:ab:51:e7:9a Lease:0x669b35b5}
	I0718 21:31:38.542222    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:11:11:3f:ac:6d ID:1,a2:11:11:3f:ac:6d Lease:0x669b3576}
	I0718 21:31:38.542234    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ce:10:ef:df:f5:dd ID:1,ce:10:ef:df:f5:dd Lease:0x669b3547}
	I0718 21:31:38.542246    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ce:83:63:32:49:35 ID:1,ce:83:63:32:49:35 Lease:0x669b34e5}
	I0718 21:31:38.542261    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:da:90:24:7c:c3:59 ID:1,da:90:24:7c:c3:59 Lease:0x669b34ca}
	I0718 21:31:38.542268    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:c5:b:2c:73:ff ID:1,9e:c5:b:2c:73:ff Lease:0x6699e275}
	I0718 21:31:38.542274    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:7a:cb:90:8a:a3:a2 ID:1,7a:cb:90:8a:a3:a2 Lease:0x669b34a2}
	I0718 21:31:38.542288    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:32:a2:3f:92:70:d7 ID:1,32:a2:3f:92:70:d7 Lease:0x669b341a}
	I0718 21:31:38.542296    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:ca:79:1e:6d ID:1,e:d3:ca:79:1e:6d Lease:0x669b30df}
	I0718 21:31:38.542303    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:fb:e4:e6:3d:70 ID:1,ca:fb:e4:e6:3d:70 Lease:0x6699debd}
	I0718 21:31:38.542310    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:de:8a:ce:de:23:52 ID:1,de:8a:ce:de:23:52 Lease:0x669b2e44}
	I0718 21:31:40.542313    7353 main.go:141] libmachine: (false-709000) DBG | Attempt 2
	I0718 21:31:40.542330    7353 main.go:141] libmachine: (false-709000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 21:31:40.542445    7353 main.go:141] libmachine: (false-709000) DBG | hyperkit pid from json: 7364
	I0718 21:31:40.543366    7353 main.go:141] libmachine: (false-709000) DBG | Searching for be:8:7d:e5:59:9a in /var/db/dhcpd_leases ...
	I0718 21:31:40.543439    7353 main.go:141] libmachine: (false-709000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0718 21:31:40.543449    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:76:69:1e:0:fd:8c ID:1,76:69:1e:0:fd:8c Lease:0x669b3d56}
	I0718 21:31:40.543460    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:8a:78:d2:74:27:42 ID:1,8a:78:d2:74:27:42 Lease:0x669b3d2e}
	I0718 21:31:40.543472    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:da:ad:aa:85:e4:ef ID:1,da:ad:aa:85:e4:ef Lease:0x669b3d03}
	I0718 21:31:40.543479    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:8a:4c:82:b5:b9:fe ID:1,8a:4c:82:b5:b9:fe Lease:0x669b3cd2}
	I0718 21:31:40.543485    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:72:fb:d0:92:80:5e ID:1,72:fb:d0:92:80:5e Lease:0x6699eb5f}
	I0718 21:31:40.543492    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:42:4f:ae:89:6:58 ID:1,42:4f:ae:89:6:58 Lease:0x669b3c49}
	I0718 21:31:40.543498    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:2e:a0:3:ba:bd:c9 ID:1,2e:a0:3:ba:bd:c9 Lease:0x669b3c6a}
	I0718 21:31:40.543503    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:86:4b:29:be:5b:d8 ID:1,86:4b:29:be:5b:d8 Lease:0x669b3c08}
	I0718 21:31:40.543514    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:56:b7:46:b5:45:39 ID:1,56:b7:46:b5:45:39 Lease:0x669b3b83}
	I0718 21:31:40.543522    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:52:2f:eb:ef:10:d ID:1,52:2f:eb:ef:10:d Lease:0x669b3b4c}
	I0718 21:31:40.543546    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:3e:79:98:ae:b1:97 ID:1,3e:79:98:ae:b1:97 Lease:0x669b3b3e}
	I0718 21:31:40.543556    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:16:a3:68:1b:ee:2f ID:1,16:a3:68:1b:ee:2f Lease:0x6699e9bf}
	I0718 21:31:40.543563    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:2:aa:9e:1e:9c:88 ID:1,2:aa:9e:1e:9c:88 Lease:0x669b3b12}
	I0718 21:31:40.543569    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:fa:57:50:16:6f:5d ID:1,fa:57:50:16:6f:5d Lease:0x6699e98c}
	I0718 21:31:40.543578    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:1e:af:f7:18:b5:e8 ID:1,1e:af:f7:18:b5:e8 Lease:0x669b3ad5}
	I0718 21:31:40.543586    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:6e:17:17:e8:8d:a7 ID:1,6e:17:17:e8:8d:a7 Lease:0x669b3a67}
	I0718 21:31:40.543592    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:36:c7:27:fa:6a:4e ID:1,36:c7:27:fa:6a:4e Lease:0x669b3a12}
	I0718 21:31:40.543598    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:46:fe:ea:f2:ee:32 ID:1,46:fe:ea:f2:ee:32 Lease:0x669b39e5}
	I0718 21:31:40.543611    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:2:b9:df:ce:6b:51 ID:1,2:b9:df:ce:6b:51 Lease:0x6699e7c9}
	I0718 21:31:40.543624    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:4c:de:4f:d8:27 ID:1,6:4c:de:4f:d8:27 Lease:0x6699e6d1}
	I0718 21:31:40.543633    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:d2:59:42:45:2c ID:1,3a:d2:59:42:45:2c Lease:0x6699e7d7}
	I0718 21:31:40.543641    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:d2:e2:11:67:74:1c ID:1,d2:e2:11:67:74:1c Lease:0x669b386d}
	I0718 21:31:40.543647    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a:aa:dd:3b:99:bf ID:1,a:aa:dd:3b:99:bf Lease:0x6699e547}
	I0718 21:31:40.543654    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:72:c0:82:86:ae:7b ID:1,72:c0:82:86:ae:7b Lease:0x6699e52d}
	I0718 21:31:40.543661    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:66:6c:75:31:3:1a ID:1,66:6c:75:31:3:1a Lease:0x6699e4fd}
	I0718 21:31:40.543669    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:4a:cc:c:f8:b6:77 ID:1,4a:cc:c:f8:b6:77 Lease:0x669b35c6}
	I0718 21:31:40.543682    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:eb:ab:51:e7:9a ID:1,7e:eb:ab:51:e7:9a Lease:0x669b35b5}
	I0718 21:31:40.543690    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:11:11:3f:ac:6d ID:1,a2:11:11:3f:ac:6d Lease:0x669b3576}
	I0718 21:31:40.543697    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ce:10:ef:df:f5:dd ID:1,ce:10:ef:df:f5:dd Lease:0x669b3547}
	I0718 21:31:40.543705    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ce:83:63:32:49:35 ID:1,ce:83:63:32:49:35 Lease:0x669b34e5}
	I0718 21:31:40.543712    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:da:90:24:7c:c3:59 ID:1,da:90:24:7c:c3:59 Lease:0x669b34ca}
	I0718 21:31:40.543719    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:c5:b:2c:73:ff ID:1,9e:c5:b:2c:73:ff Lease:0x6699e275}
	I0718 21:31:40.543737    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:7a:cb:90:8a:a3:a2 ID:1,7a:cb:90:8a:a3:a2 Lease:0x669b34a2}
	I0718 21:31:40.543752    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:32:a2:3f:92:70:d7 ID:1,32:a2:3f:92:70:d7 Lease:0x669b341a}
	I0718 21:31:40.543764    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:ca:79:1e:6d ID:1,e:d3:ca:79:1e:6d Lease:0x669b30df}
	I0718 21:31:40.543773    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:fb:e4:e6:3d:70 ID:1,ca:fb:e4:e6:3d:70 Lease:0x6699debd}
	I0718 21:31:40.543783    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:de:8a:ce:de:23:52 ID:1,de:8a:ce:de:23:52 Lease:0x669b2e44}
	I0718 21:31:42.384696    7353 main.go:141] libmachine: (false-709000) DBG | 2024/07/18 21:31:42 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0718 21:31:42.384737    7353 main.go:141] libmachine: (false-709000) DBG | 2024/07/18 21:31:42 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0718 21:31:42.384750    7353 main.go:141] libmachine: (false-709000) DBG | 2024/07/18 21:31:42 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0718 21:31:42.407768    7353 main.go:141] libmachine: (false-709000) DBG | 2024/07/18 21:31:42 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0718 21:31:42.545413    7353 main.go:141] libmachine: (false-709000) DBG | Attempt 3
	I0718 21:31:42.545428    7353 main.go:141] libmachine: (false-709000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 21:31:42.545540    7353 main.go:141] libmachine: (false-709000) DBG | hyperkit pid from json: 7364
	I0718 21:31:42.546395    7353 main.go:141] libmachine: (false-709000) DBG | Searching for be:8:7d:e5:59:9a in /var/db/dhcpd_leases ...
	I0718 21:31:42.546465    7353 main.go:141] libmachine: (false-709000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0718 21:31:42.546473    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:76:69:1e:0:fd:8c ID:1,76:69:1e:0:fd:8c Lease:0x669b3d56}
	I0718 21:31:42.546483    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:8a:78:d2:74:27:42 ID:1,8a:78:d2:74:27:42 Lease:0x669b3d2e}
	I0718 21:31:42.546498    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:da:ad:aa:85:e4:ef ID:1,da:ad:aa:85:e4:ef Lease:0x669b3d03}
	I0718 21:31:42.546505    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:8a:4c:82:b5:b9:fe ID:1,8a:4c:82:b5:b9:fe Lease:0x669b3cd2}
	I0718 21:31:42.546512    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:72:fb:d0:92:80:5e ID:1,72:fb:d0:92:80:5e Lease:0x6699eb5f}
	I0718 21:31:42.546518    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:42:4f:ae:89:6:58 ID:1,42:4f:ae:89:6:58 Lease:0x669b3c49}
	I0718 21:31:42.546530    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:2e:a0:3:ba:bd:c9 ID:1,2e:a0:3:ba:bd:c9 Lease:0x669b3c6a}
	I0718 21:31:42.546537    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:86:4b:29:be:5b:d8 ID:1,86:4b:29:be:5b:d8 Lease:0x669b3c08}
	I0718 21:31:42.546544    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:56:b7:46:b5:45:39 ID:1,56:b7:46:b5:45:39 Lease:0x669b3b83}
	I0718 21:31:42.546552    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:52:2f:eb:ef:10:d ID:1,52:2f:eb:ef:10:d Lease:0x669b3b4c}
	I0718 21:31:42.546559    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:3e:79:98:ae:b1:97 ID:1,3e:79:98:ae:b1:97 Lease:0x669b3b3e}
	I0718 21:31:42.546564    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:16:a3:68:1b:ee:2f ID:1,16:a3:68:1b:ee:2f Lease:0x6699e9bf}
	I0718 21:31:42.546580    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:2:aa:9e:1e:9c:88 ID:1,2:aa:9e:1e:9c:88 Lease:0x669b3b12}
	I0718 21:31:42.546592    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:fa:57:50:16:6f:5d ID:1,fa:57:50:16:6f:5d Lease:0x6699e98c}
	I0718 21:31:42.546602    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:1e:af:f7:18:b5:e8 ID:1,1e:af:f7:18:b5:e8 Lease:0x669b3ad5}
	I0718 21:31:42.546612    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:6e:17:17:e8:8d:a7 ID:1,6e:17:17:e8:8d:a7 Lease:0x669b3a67}
	I0718 21:31:42.546625    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:36:c7:27:fa:6a:4e ID:1,36:c7:27:fa:6a:4e Lease:0x669b3a12}
	I0718 21:31:42.546635    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:46:fe:ea:f2:ee:32 ID:1,46:fe:ea:f2:ee:32 Lease:0x669b39e5}
	I0718 21:31:42.546651    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:2:b9:df:ce:6b:51 ID:1,2:b9:df:ce:6b:51 Lease:0x6699e7c9}
	I0718 21:31:42.546664    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:4c:de:4f:d8:27 ID:1,6:4c:de:4f:d8:27 Lease:0x6699e6d1}
	I0718 21:31:42.546673    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:d2:59:42:45:2c ID:1,3a:d2:59:42:45:2c Lease:0x6699e7d7}
	I0718 21:31:42.546681    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:d2:e2:11:67:74:1c ID:1,d2:e2:11:67:74:1c Lease:0x669b386d}
	I0718 21:31:42.546690    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a:aa:dd:3b:99:bf ID:1,a:aa:dd:3b:99:bf Lease:0x6699e547}
	I0718 21:31:42.546696    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:72:c0:82:86:ae:7b ID:1,72:c0:82:86:ae:7b Lease:0x6699e52d}
	I0718 21:31:42.546708    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:66:6c:75:31:3:1a ID:1,66:6c:75:31:3:1a Lease:0x6699e4fd}
	I0718 21:31:42.546721    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:4a:cc:c:f8:b6:77 ID:1,4a:cc:c:f8:b6:77 Lease:0x669b35c6}
	I0718 21:31:42.546729    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:eb:ab:51:e7:9a ID:1,7e:eb:ab:51:e7:9a Lease:0x669b35b5}
	I0718 21:31:42.546738    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:11:11:3f:ac:6d ID:1,a2:11:11:3f:ac:6d Lease:0x669b3576}
	I0718 21:31:42.546745    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ce:10:ef:df:f5:dd ID:1,ce:10:ef:df:f5:dd Lease:0x669b3547}
	I0718 21:31:42.546752    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ce:83:63:32:49:35 ID:1,ce:83:63:32:49:35 Lease:0x669b34e5}
	I0718 21:31:42.546759    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:da:90:24:7c:c3:59 ID:1,da:90:24:7c:c3:59 Lease:0x669b34ca}
	I0718 21:31:42.546766    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:c5:b:2c:73:ff ID:1,9e:c5:b:2c:73:ff Lease:0x6699e275}
	I0718 21:31:42.546773    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:7a:cb:90:8a:a3:a2 ID:1,7a:cb:90:8a:a3:a2 Lease:0x669b34a2}
	I0718 21:31:42.546781    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:32:a2:3f:92:70:d7 ID:1,32:a2:3f:92:70:d7 Lease:0x669b341a}
	I0718 21:31:42.546788    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:ca:79:1e:6d ID:1,e:d3:ca:79:1e:6d Lease:0x669b30df}
	I0718 21:31:42.546795    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:fb:e4:e6:3d:70 ID:1,ca:fb:e4:e6:3d:70 Lease:0x6699debd}
	I0718 21:31:42.546803    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:de:8a:ce:de:23:52 ID:1,de:8a:ce:de:23:52 Lease:0x669b2e44}
	I0718 21:31:44.546809    7353 main.go:141] libmachine: (false-709000) DBG | Attempt 4
	I0718 21:31:44.546824    7353 main.go:141] libmachine: (false-709000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 21:31:44.546933    7353 main.go:141] libmachine: (false-709000) DBG | hyperkit pid from json: 7364
	I0718 21:31:44.547756    7353 main.go:141] libmachine: (false-709000) DBG | Searching for be:8:7d:e5:59:9a in /var/db/dhcpd_leases ...
	I0718 21:31:44.547824    7353 main.go:141] libmachine: (false-709000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0718 21:31:44.547833    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:76:69:1e:0:fd:8c ID:1,76:69:1e:0:fd:8c Lease:0x669b3d56}
	I0718 21:31:44.547841    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:8a:78:d2:74:27:42 ID:1,8a:78:d2:74:27:42 Lease:0x669b3d2e}
	I0718 21:31:44.547846    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:da:ad:aa:85:e4:ef ID:1,da:ad:aa:85:e4:ef Lease:0x669b3d03}
	I0718 21:31:44.547864    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:8a:4c:82:b5:b9:fe ID:1,8a:4c:82:b5:b9:fe Lease:0x669b3cd2}
	I0718 21:31:44.547874    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:72:fb:d0:92:80:5e ID:1,72:fb:d0:92:80:5e Lease:0x6699eb5f}
	I0718 21:31:44.547896    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:42:4f:ae:89:6:58 ID:1,42:4f:ae:89:6:58 Lease:0x669b3c49}
	I0718 21:31:44.547904    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:2e:a0:3:ba:bd:c9 ID:1,2e:a0:3:ba:bd:c9 Lease:0x669b3c6a}
	I0718 21:31:44.547911    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:86:4b:29:be:5b:d8 ID:1,86:4b:29:be:5b:d8 Lease:0x669b3c08}
	I0718 21:31:44.547918    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:56:b7:46:b5:45:39 ID:1,56:b7:46:b5:45:39 Lease:0x669b3b83}
	I0718 21:31:44.547931    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:52:2f:eb:ef:10:d ID:1,52:2f:eb:ef:10:d Lease:0x669b3b4c}
	I0718 21:31:44.547941    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:3e:79:98:ae:b1:97 ID:1,3e:79:98:ae:b1:97 Lease:0x669b3b3e}
	I0718 21:31:44.547957    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:16:a3:68:1b:ee:2f ID:1,16:a3:68:1b:ee:2f Lease:0x6699e9bf}
	I0718 21:31:44.547970    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:2:aa:9e:1e:9c:88 ID:1,2:aa:9e:1e:9c:88 Lease:0x669b3b12}
	I0718 21:31:44.547994    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:fa:57:50:16:6f:5d ID:1,fa:57:50:16:6f:5d Lease:0x6699e98c}
	I0718 21:31:44.548004    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:1e:af:f7:18:b5:e8 ID:1,1e:af:f7:18:b5:e8 Lease:0x669b3ad5}
	I0718 21:31:44.548015    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:6e:17:17:e8:8d:a7 ID:1,6e:17:17:e8:8d:a7 Lease:0x669b3a67}
	I0718 21:31:44.548027    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:36:c7:27:fa:6a:4e ID:1,36:c7:27:fa:6a:4e Lease:0x669b3a12}
	I0718 21:31:44.548035    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:46:fe:ea:f2:ee:32 ID:1,46:fe:ea:f2:ee:32 Lease:0x669b39e5}
	I0718 21:31:44.548043    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:2:b9:df:ce:6b:51 ID:1,2:b9:df:ce:6b:51 Lease:0x6699e7c9}
	I0718 21:31:44.548050    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:6:4c:de:4f:d8:27 ID:1,6:4c:de:4f:d8:27 Lease:0x6699e6d1}
	I0718 21:31:44.548059    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:3a:d2:59:42:45:2c ID:1,3a:d2:59:42:45:2c Lease:0x6699e7d7}
	I0718 21:31:44.548073    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:d2:e2:11:67:74:1c ID:1,d2:e2:11:67:74:1c Lease:0x669b386d}
	I0718 21:31:44.548086    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:a:aa:dd:3b:99:bf ID:1,a:aa:dd:3b:99:bf Lease:0x6699e547}
	I0718 21:31:44.548094    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:72:c0:82:86:ae:7b ID:1,72:c0:82:86:ae:7b Lease:0x6699e52d}
	I0718 21:31:44.548100    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:66:6c:75:31:3:1a ID:1,66:6c:75:31:3:1a Lease:0x6699e4fd}
	I0718 21:31:44.548107    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:4a:cc:c:f8:b6:77 ID:1,4a:cc:c:f8:b6:77 Lease:0x669b35c6}
	I0718 21:31:44.548115    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:7e:eb:ab:51:e7:9a ID:1,7e:eb:ab:51:e7:9a Lease:0x669b35b5}
	I0718 21:31:44.548122    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a2:11:11:3f:ac:6d ID:1,a2:11:11:3f:ac:6d Lease:0x669b3576}
	I0718 21:31:44.548131    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ce:10:ef:df:f5:dd ID:1,ce:10:ef:df:f5:dd Lease:0x669b3547}
	I0718 21:31:44.548153    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:ce:83:63:32:49:35 ID:1,ce:83:63:32:49:35 Lease:0x669b34e5}
	I0718 21:31:44.548182    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:da:90:24:7c:c3:59 ID:1,da:90:24:7c:c3:59 Lease:0x669b34ca}
	I0718 21:31:44.548193    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:9e:c5:b:2c:73:ff ID:1,9e:c5:b:2c:73:ff Lease:0x6699e275}
	I0718 21:31:44.548202    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:7a:cb:90:8a:a3:a2 ID:1,7a:cb:90:8a:a3:a2 Lease:0x669b34a2}
	I0718 21:31:44.548209    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:32:a2:3f:92:70:d7 ID:1,32:a2:3f:92:70:d7 Lease:0x669b341a}
	I0718 21:31:44.548217    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:ca:79:1e:6d ID:1,e:d3:ca:79:1e:6d Lease:0x669b30df}
	I0718 21:31:44.548227    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:ca:fb:e4:e6:3d:70 ID:1,ca:fb:e4:e6:3d:70 Lease:0x6699debd}
	I0718 21:31:44.548239    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:de:8a:ce:de:23:52 ID:1,de:8a:ce:de:23:52 Lease:0x669b2e44}
	I0718 21:31:46.549873    7353 main.go:141] libmachine: (false-709000) DBG | Attempt 5
	I0718 21:31:46.549884    7353 main.go:141] libmachine: (false-709000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 21:31:46.549963    7353 main.go:141] libmachine: (false-709000) DBG | hyperkit pid from json: 7364
	I0718 21:31:46.550797    7353 main.go:141] libmachine: (false-709000) DBG | Searching for be:8:7d:e5:59:9a in /var/db/dhcpd_leases ...
	I0718 21:31:46.550879    7353 main.go:141] libmachine: (false-709000) DBG | Found 38 entries in /var/db/dhcpd_leases!
	I0718 21:31:46.550910    7353 main.go:141] libmachine: (false-709000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:be:8:7d:e5:59:9a ID:1,be:8:7d:e5:59:9a Lease:0x669b3db1}
	I0718 21:31:46.550924    7353 main.go:141] libmachine: (false-709000) DBG | Found match: be:8:7d:e5:59:9a
	I0718 21:31:46.550946    7353 main.go:141] libmachine: (false-709000) DBG | IP: 192.169.0.39
	I0718 21:31:46.550961    7353 main.go:141] libmachine: (false-709000) Calling .GetConfigRaw
	I0718 21:31:46.551599    7353 main.go:141] libmachine: (false-709000) Calling .DriverName
	I0718 21:31:46.551694    7353 main.go:141] libmachine: (false-709000) Calling .DriverName
	I0718 21:31:46.551786    7353 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0718 21:31:46.551794    7353 main.go:141] libmachine: (false-709000) Calling .GetState
	I0718 21:31:46.551868    7353 main.go:141] libmachine: (false-709000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 21:31:46.551942    7353 main.go:141] libmachine: (false-709000) DBG | hyperkit pid from json: 7364
	I0718 21:31:46.552762    7353 main.go:141] libmachine: Detecting operating system of created instance...
	I0718 21:31:46.552774    7353 main.go:141] libmachine: Waiting for SSH to be available...
	I0718 21:31:46.552781    7353 main.go:141] libmachine: Getting to WaitForSSH function...
	I0718 21:31:46.552787    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHHostname
	I0718 21:31:46.552878    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHPort
	I0718 21:31:46.552980    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHKeyPath
	I0718 21:31:46.553067    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHKeyPath
	I0718 21:31:46.553164    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHUsername
	I0718 21:31:46.553831    7353 main.go:141] libmachine: Using SSH client type: native
	I0718 21:31:46.554039    7353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb3020c0] 0xb304e20 <nil>  [] 0s} 192.169.0.39 22 <nil> <nil>}
	I0718 21:31:46.554047    7353 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0718 21:31:46.616480    7353 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 21:31:46.616493    7353 main.go:141] libmachine: Detecting the provisioner...
	I0718 21:31:46.616498    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHHostname
	I0718 21:31:46.616636    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHPort
	I0718 21:31:46.616741    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHKeyPath
	I0718 21:31:46.616841    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHKeyPath
	I0718 21:31:46.616943    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHUsername
	I0718 21:31:46.617098    7353 main.go:141] libmachine: Using SSH client type: native
	I0718 21:31:46.617245    7353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb3020c0] 0xb304e20 <nil>  [] 0s} 192.169.0.39 22 <nil> <nil>}
	I0718 21:31:46.617253    7353 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0718 21:31:46.686093    7353 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0718 21:31:46.686178    7353 main.go:141] libmachine: found compatible host: buildroot
	I0718 21:31:46.686188    7353 main.go:141] libmachine: Provisioning with buildroot...
	I0718 21:31:46.686193    7353 main.go:141] libmachine: (false-709000) Calling .GetMachineName
	I0718 21:31:46.686341    7353 buildroot.go:166] provisioning hostname "false-709000"
	I0718 21:31:46.686351    7353 main.go:141] libmachine: (false-709000) Calling .GetMachineName
	I0718 21:31:46.686449    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHHostname
	I0718 21:31:46.686552    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHPort
	I0718 21:31:46.686645    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHKeyPath
	I0718 21:31:46.686722    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHKeyPath
	I0718 21:31:46.686840    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHUsername
	I0718 21:31:46.686978    7353 main.go:141] libmachine: Using SSH client type: native
	I0718 21:31:46.687136    7353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb3020c0] 0xb304e20 <nil>  [] 0s} 192.169.0.39 22 <nil> <nil>}
	I0718 21:31:46.687144    7353 main.go:141] libmachine: About to run SSH command:
	sudo hostname false-709000 && echo "false-709000" | sudo tee /etc/hostname
	I0718 21:31:46.758342    7353 main.go:141] libmachine: SSH cmd err, output: <nil>: false-709000
	
	I0718 21:31:46.758361    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHHostname
	I0718 21:31:46.758514    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHPort
	I0718 21:31:46.758616    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHKeyPath
	I0718 21:31:46.758709    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHKeyPath
	I0718 21:31:46.758795    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHUsername
	I0718 21:31:46.758926    7353 main.go:141] libmachine: Using SSH client type: native
	I0718 21:31:46.759077    7353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb3020c0] 0xb304e20 <nil>  [] 0s} 192.169.0.39 22 <nil> <nil>}
	I0718 21:31:46.759089    7353 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sfalse-709000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 false-709000/g' /etc/hosts;
				else 
					echo '127.0.1.1 false-709000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0718 21:31:46.825162    7353 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0718 21:31:46.825182    7353 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19302-1411/.minikube CaCertPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19302-1411/.minikube}
	I0718 21:31:46.825193    7353 buildroot.go:174] setting up certificates
	I0718 21:31:46.825204    7353 provision.go:84] configureAuth start
	I0718 21:31:46.825211    7353 main.go:141] libmachine: (false-709000) Calling .GetMachineName
	I0718 21:31:46.825347    7353 main.go:141] libmachine: (false-709000) Calling .GetIP
	I0718 21:31:46.825437    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHHostname
	I0718 21:31:46.825527    7353 provision.go:143] copyHostCerts
	I0718 21:31:46.825619    7353 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1411/.minikube/cert.pem, removing ...
	I0718 21:31:46.825632    7353 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1411/.minikube/cert.pem
	I0718 21:31:46.825774    7353 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19302-1411/.minikube/cert.pem (1123 bytes)
	I0718 21:31:46.826001    7353 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1411/.minikube/key.pem, removing ...
	I0718 21:31:46.826009    7353 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1411/.minikube/key.pem
	I0718 21:31:46.826127    7353 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19302-1411/.minikube/key.pem (1675 bytes)
	I0718 21:31:46.826305    7353 exec_runner.go:144] found /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.pem, removing ...
	I0718 21:31:46.826312    7353 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.pem
	I0718 21:31:46.826398    7353 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19302-1411/.minikube/ca.pem (1082 bytes)
	I0718 21:31:46.826549    7353 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca-key.pem org=jenkins.false-709000 san=[127.0.0.1 192.169.0.39 false-709000 localhost minikube]
	I0718 21:31:46.919033    7353 provision.go:177] copyRemoteCerts
	I0718 21:31:46.919093    7353 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0718 21:31:46.919111    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHHostname
	I0718 21:31:46.919265    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHPort
	I0718 21:31:46.919379    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHKeyPath
	I0718 21:31:46.919482    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHUsername
	I0718 21:31:46.919581    7353 sshutil.go:53] new ssh client: &{IP:192.169.0.39 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/id_rsa Username:docker}
	I0718 21:31:46.964067    7353 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0718 21:31:46.984459    7353 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server.pem --> /etc/docker/server.pem (1204 bytes)
	I0718 21:31:47.004861    7353 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0718 21:31:47.025615    7353 provision.go:87] duration metric: took 200.388888ms to configureAuth
	I0718 21:31:47.025631    7353 buildroot.go:189] setting minikube options for container-runtime
	I0718 21:31:47.025768    7353 config.go:182] Loaded profile config "false-709000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:31:47.025783    7353 main.go:141] libmachine: (false-709000) Calling .DriverName
	I0718 21:31:47.025925    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHHostname
	I0718 21:31:47.026026    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHPort
	I0718 21:31:47.026120    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHKeyPath
	I0718 21:31:47.026212    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHKeyPath
	I0718 21:31:47.026318    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHUsername
	I0718 21:31:47.026433    7353 main.go:141] libmachine: Using SSH client type: native
	I0718 21:31:47.026570    7353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb3020c0] 0xb304e20 <nil>  [] 0s} 192.169.0.39 22 <nil> <nil>}
	I0718 21:31:47.026578    7353 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0718 21:31:47.086863    7353 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0718 21:31:47.086875    7353 buildroot.go:70] root file system type: tmpfs
	I0718 21:31:47.086952    7353 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0718 21:31:47.086967    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHHostname
	I0718 21:31:47.087095    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHPort
	I0718 21:31:47.087195    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHKeyPath
	I0718 21:31:47.087298    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHKeyPath
	I0718 21:31:47.087400    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHUsername
	I0718 21:31:47.087556    7353 main.go:141] libmachine: Using SSH client type: native
	I0718 21:31:47.087691    7353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb3020c0] 0xb304e20 <nil>  [] 0s} 192.169.0.39 22 <nil> <nil>}
	I0718 21:31:47.087734    7353 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0718 21:31:47.160270    7353 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0718 21:31:47.160289    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHHostname
	I0718 21:31:47.160439    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHPort
	I0718 21:31:47.160537    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHKeyPath
	I0718 21:31:47.160649    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHKeyPath
	I0718 21:31:47.160742    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHUsername
	I0718 21:31:47.160862    7353 main.go:141] libmachine: Using SSH client type: native
	I0718 21:31:47.161013    7353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb3020c0] 0xb304e20 <nil>  [] 0s} 192.169.0.39 22 <nil> <nil>}
	I0718 21:31:47.161026    7353 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0718 21:31:48.742452    7353 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0718 21:31:48.742468    7353 main.go:141] libmachine: Checking connection to Docker...
	I0718 21:31:48.742475    7353 main.go:141] libmachine: (false-709000) Calling .GetURL
	I0718 21:31:48.742629    7353 main.go:141] libmachine: Docker is up and running!
	I0718 21:31:48.742636    7353 main.go:141] libmachine: Reticulating splines...
	I0718 21:31:48.742640    7353 client.go:171] duration metric: took 12.946963113s to LocalClient.Create
	I0718 21:31:48.742652    7353 start.go:167] duration metric: took 12.947004807s to libmachine.API.Create "false-709000"
	I0718 21:31:48.742662    7353 start.go:293] postStartSetup for "false-709000" (driver="hyperkit")
	I0718 21:31:48.742669    7353 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0718 21:31:48.742679    7353 main.go:141] libmachine: (false-709000) Calling .DriverName
	I0718 21:31:48.742844    7353 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0718 21:31:48.742857    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHHostname
	I0718 21:31:48.742942    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHPort
	I0718 21:31:48.743024    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHKeyPath
	I0718 21:31:48.743107    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHUsername
	I0718 21:31:48.743203    7353 sshutil.go:53] new ssh client: &{IP:192.169.0.39 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/id_rsa Username:docker}
	I0718 21:31:48.785863    7353 ssh_runner.go:195] Run: cat /etc/os-release
	I0718 21:31:48.790348    7353 info.go:137] Remote host: Buildroot 2023.02.9
	I0718 21:31:48.790367    7353 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1411/.minikube/addons for local assets ...
	I0718 21:31:48.790501    7353 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19302-1411/.minikube/files for local assets ...
	I0718 21:31:48.790912    7353 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19302-1411/.minikube/files/etc/ssl/certs/19482.pem -> 19482.pem in /etc/ssl/certs
	I0718 21:31:48.791138    7353 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0718 21:31:48.800179    7353 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19302-1411/.minikube/files/etc/ssl/certs/19482.pem --> /etc/ssl/certs/19482.pem (1708 bytes)
	I0718 21:31:48.830977    7353 start.go:296] duration metric: took 88.294727ms for postStartSetup
	I0718 21:31:48.831006    7353 main.go:141] libmachine: (false-709000) Calling .GetConfigRaw
	I0718 21:31:48.831610    7353 main.go:141] libmachine: (false-709000) Calling .GetIP
	I0718 21:31:48.831754    7353 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/false-709000/config.json ...
	I0718 21:31:48.832107    7353 start.go:128] duration metric: took 13.090733857s to createHost
	I0718 21:31:48.832124    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHHostname
	I0718 21:31:48.832216    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHPort
	I0718 21:31:48.832304    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHKeyPath
	I0718 21:31:48.832375    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHKeyPath
	I0718 21:31:48.832454    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHUsername
	I0718 21:31:48.832549    7353 main.go:141] libmachine: Using SSH client type: native
	I0718 21:31:48.832665    7353 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb3020c0] 0xb304e20 <nil>  [] 0s} 192.169.0.39 22 <nil> <nil>}
	I0718 21:31:48.832672    7353 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0718 21:31:48.894282    7353 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721363508.952861209
	
	I0718 21:31:48.894295    7353 fix.go:216] guest clock: 1721363508.952861209
	I0718 21:31:48.894300    7353 fix.go:229] Guest: 2024-07-18 21:31:48.952861209 -0700 PDT Remote: 2024-07-18 21:31:48.832116 -0700 PDT m=+13.655434543 (delta=120.745209ms)
	I0718 21:31:48.894321    7353 fix.go:200] guest clock delta is within tolerance: 120.745209ms
	I0718 21:31:48.894325    7353 start.go:83] releasing machines lock for "false-709000", held for 13.153084208s
	I0718 21:31:48.894344    7353 main.go:141] libmachine: (false-709000) Calling .DriverName
	I0718 21:31:48.894479    7353 main.go:141] libmachine: (false-709000) Calling .GetIP
	I0718 21:31:48.894580    7353 main.go:141] libmachine: (false-709000) Calling .DriverName
	I0718 21:31:48.894893    7353 main.go:141] libmachine: (false-709000) Calling .DriverName
	I0718 21:31:48.895003    7353 main.go:141] libmachine: (false-709000) Calling .DriverName
	I0718 21:31:48.895144    7353 ssh_runner.go:195] Run: cat /version.json
	I0718 21:31:48.895157    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHHostname
	I0718 21:31:48.895259    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHPort
	I0718 21:31:48.895342    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHKeyPath
	I0718 21:31:48.895422    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHUsername
	I0718 21:31:48.895502    7353 sshutil.go:53] new ssh client: &{IP:192.169.0.39 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/id_rsa Username:docker}
	I0718 21:31:48.895731    7353 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0718 21:31:48.895760    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHHostname
	I0718 21:31:48.895876    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHPort
	I0718 21:31:48.895991    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHKeyPath
	I0718 21:31:48.896157    7353 main.go:141] libmachine: (false-709000) Calling .GetSSHUsername
	I0718 21:31:48.896268    7353 sshutil.go:53] new ssh client: &{IP:192.169.0.39 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/false-709000/id_rsa Username:docker}
	I0718 21:31:48.928508    7353 ssh_runner.go:195] Run: systemctl --version
	I0718 21:31:48.933657    7353 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0718 21:31:48.978164    7353 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0718 21:31:48.978231    7353 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0718 21:31:48.985509    7353 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0718 21:31:48.997967    7353 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0718 21:31:48.997984    7353 start.go:495] detecting cgroup driver to use...
	I0718 21:31:48.998097    7353 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 21:31:49.015833    7353 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0718 21:31:49.028723    7353 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0718 21:31:49.038325    7353 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0718 21:31:49.038381    7353 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0718 21:31:49.047902    7353 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 21:31:49.057692    7353 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0718 21:31:49.067624    7353 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0718 21:31:49.077392    7353 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0718 21:31:49.087178    7353 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0718 21:31:49.096757    7353 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0718 21:31:49.106597    7353 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0718 21:31:49.116319    7353 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0718 21:31:49.124465    7353 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0718 21:31:49.132430    7353 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:31:49.231501    7353 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0718 21:31:49.250528    7353 start.go:495] detecting cgroup driver to use...
	I0718 21:31:49.250600    7353 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0718 21:31:49.270135    7353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 21:31:49.286196    7353 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0718 21:31:49.306146    7353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0718 21:31:49.317400    7353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 21:31:49.328120    7353 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0718 21:31:49.349704    7353 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0718 21:31:49.360468    7353 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0718 21:31:49.376016    7353 ssh_runner.go:195] Run: which cri-dockerd
	I0718 21:31:49.379170    7353 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0718 21:31:49.386832    7353 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0718 21:31:49.401405    7353 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0718 21:31:49.503777    7353 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0718 21:31:49.613578    7353 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0718 21:31:49.613660    7353 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0718 21:31:49.627808    7353 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0718 21:31:49.727056    7353 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0718 21:32:50.755027    7353 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.0261228s)
	I0718 21:32:50.755104    7353 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0718 21:32:50.791097    7353 out.go:177] 
	W0718 21:32:50.811829    7353 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 04:31:47 false-709000 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:31:47 false-709000 dockerd[517]: time="2024-07-19T04:31:47.543166338Z" level=info msg="Starting up"
	Jul 19 04:31:47 false-709000 dockerd[517]: time="2024-07-19T04:31:47.543591946Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 04:31:47 false-709000 dockerd[517]: time="2024-07-19T04:31:47.544216378Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=523
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.560679207Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.576521501Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.576597141Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.576665465Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.576703457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.576790313Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.576830457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.577002810Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.577049814Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.577082814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.577152011Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.577238009Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.577425406Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.579048117Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.579103132Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.579243948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.579288483Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.579447087Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.579522224Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.581518866Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.581606688Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.581655174Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.581704530Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.581750111Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.581843489Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582495177Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582611706Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582651025Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582666724Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582680621Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582691570Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582703568Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582716770Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582729611Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582742626Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582757107Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582770326Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582787700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582798243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582810031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582822027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582833618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582845456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582866020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582880575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582892984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582906838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582941059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582955709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582967467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582987307Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.583009621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.583022234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.583033273Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.583065480Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.583103197Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.583118606Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.583130654Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.583138623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.583149869Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.583193320Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.583935098Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.584063779Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.584151116Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.584215671Z" level=info msg="containerd successfully booted in 0.024126s"
	Jul 19 04:31:48 false-709000 dockerd[517]: time="2024-07-19T04:31:48.574362447Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 04:31:48 false-709000 dockerd[517]: time="2024-07-19T04:31:48.586481564Z" level=info msg="Loading containers: start."
	Jul 19 04:31:48 false-709000 dockerd[517]: time="2024-07-19T04:31:48.678577893Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 04:31:48 false-709000 dockerd[517]: time="2024-07-19T04:31:48.764009569Z" level=info msg="Loading containers: done."
	Jul 19 04:31:48 false-709000 dockerd[517]: time="2024-07-19T04:31:48.771813852Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 04:31:48 false-709000 dockerd[517]: time="2024-07-19T04:31:48.771935175Z" level=info msg="Daemon has completed initialization"
	Jul 19 04:31:48 false-709000 dockerd[517]: time="2024-07-19T04:31:48.798118345Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 04:31:48 false-709000 systemd[1]: Started Docker Application Container Engine.
	Jul 19 04:31:48 false-709000 dockerd[517]: time="2024-07-19T04:31:48.798445615Z" level=info msg="API listen on [::]:2376"
	Jul 19 04:31:49 false-709000 dockerd[517]: time="2024-07-19T04:31:49.799636925Z" level=info msg="Processing signal 'terminated'"
	Jul 19 04:31:49 false-709000 dockerd[517]: time="2024-07-19T04:31:49.800455538Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 04:31:49 false-709000 dockerd[517]: time="2024-07-19T04:31:49.800817331Z" level=info msg="Daemon shutdown complete"
	Jul 19 04:31:49 false-709000 dockerd[517]: time="2024-07-19T04:31:49.800887289Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 04:31:49 false-709000 dockerd[517]: time="2024-07-19T04:31:49.800925276Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 04:31:49 false-709000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 04:31:50 false-709000 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 04:31:50 false-709000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:31:50 false-709000 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:31:50 false-709000 dockerd[1006]: time="2024-07-19T04:31:50.840810457Z" level=info msg="Starting up"
	Jul 19 04:32:50 false-709000 dockerd[1006]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:32:50 false-709000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:32:50 false-709000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:32:50 false-709000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 19 04:31:47 false-709000 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:31:47 false-709000 dockerd[517]: time="2024-07-19T04:31:47.543166338Z" level=info msg="Starting up"
	Jul 19 04:31:47 false-709000 dockerd[517]: time="2024-07-19T04:31:47.543591946Z" level=info msg="containerd not running, starting managed containerd"
	Jul 19 04:31:47 false-709000 dockerd[517]: time="2024-07-19T04:31:47.544216378Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=523
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.560679207Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.576521501Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.576597141Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.576665465Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.576703457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.576790313Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.576830457Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.577002810Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.577049814Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.577082814Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.577152011Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.577238009Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.577425406Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.579048117Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.579103132Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.579243948Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.579288483Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.579447087Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.579522224Z" level=info msg="metadata content store policy set" policy=shared
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.581518866Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.581606688Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.581655174Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.581704530Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.581750111Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.581843489Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582495177Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582611706Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582651025Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582666724Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582680621Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582691570Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582703568Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582716770Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582729611Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582742626Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582757107Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582770326Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582787700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582798243Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582810031Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582822027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582833618Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582845456Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582866020Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582880575Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582892984Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582906838Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582941059Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582955709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582967467Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.582987307Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.583009621Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.583022234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.583033273Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.583065480Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.583103197Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.583118606Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.583130654Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.583138623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.583149869Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.583193320Z" level=info msg="NRI interface is disabled by configuration."
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.583935098Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.584063779Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.584151116Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 19 04:31:47 false-709000 dockerd[523]: time="2024-07-19T04:31:47.584215671Z" level=info msg="containerd successfully booted in 0.024126s"
	Jul 19 04:31:48 false-709000 dockerd[517]: time="2024-07-19T04:31:48.574362447Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 19 04:31:48 false-709000 dockerd[517]: time="2024-07-19T04:31:48.586481564Z" level=info msg="Loading containers: start."
	Jul 19 04:31:48 false-709000 dockerd[517]: time="2024-07-19T04:31:48.678577893Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 19 04:31:48 false-709000 dockerd[517]: time="2024-07-19T04:31:48.764009569Z" level=info msg="Loading containers: done."
	Jul 19 04:31:48 false-709000 dockerd[517]: time="2024-07-19T04:31:48.771813852Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 19 04:31:48 false-709000 dockerd[517]: time="2024-07-19T04:31:48.771935175Z" level=info msg="Daemon has completed initialization"
	Jul 19 04:31:48 false-709000 dockerd[517]: time="2024-07-19T04:31:48.798118345Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 19 04:31:48 false-709000 systemd[1]: Started Docker Application Container Engine.
	Jul 19 04:31:48 false-709000 dockerd[517]: time="2024-07-19T04:31:48.798445615Z" level=info msg="API listen on [::]:2376"
	Jul 19 04:31:49 false-709000 dockerd[517]: time="2024-07-19T04:31:49.799636925Z" level=info msg="Processing signal 'terminated'"
	Jul 19 04:31:49 false-709000 dockerd[517]: time="2024-07-19T04:31:49.800455538Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 19 04:31:49 false-709000 dockerd[517]: time="2024-07-19T04:31:49.800817331Z" level=info msg="Daemon shutdown complete"
	Jul 19 04:31:49 false-709000 dockerd[517]: time="2024-07-19T04:31:49.800887289Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 19 04:31:49 false-709000 dockerd[517]: time="2024-07-19T04:31:49.800925276Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 19 04:31:49 false-709000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 19 04:31:50 false-709000 systemd[1]: docker.service: Deactivated successfully.
	Jul 19 04:31:50 false-709000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 19 04:31:50 false-709000 systemd[1]: Starting Docker Application Container Engine...
	Jul 19 04:31:50 false-709000 dockerd[1006]: time="2024-07-19T04:31:50.840810457Z" level=info msg="Starting up"
	Jul 19 04:32:50 false-709000 dockerd[1006]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 19 04:32:50 false-709000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 19 04:32:50 false-709000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 19 04:32:50 false-709000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0718 21:32:50.811906    7353 out.go:239] * 
	W0718 21:32:50.812547    7353 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0718 21:32:50.874728    7353 out.go:177] 

** /stderr **
net_test.go:114: failed start: exit status 90
--- FAIL: TestNetworkPlugins/group/false/Start (75.78s)


Test pass (318/345)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 12.23
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.29
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.2
12 TestDownloadOnly/v1.30.3/json-events 7.38
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.29
18 TestDownloadOnly/v1.30.3/DeleteAll 0.31
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.21
21 TestDownloadOnly/v1.31.0-beta.0/json-events 15.3
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.29
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.23
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.21
30 TestBinaryMirror 0.94
31 TestOffline 62.03
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.17
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.19
36 TestAddons/Setup 327.8
38 TestAddons/parallel/Registry 14.38
39 TestAddons/parallel/Ingress 18.34
40 TestAddons/parallel/InspektorGadget 10.49
41 TestAddons/parallel/MetricsServer 5.62
42 TestAddons/parallel/HelmTiller 10.88
44 TestAddons/parallel/CSI 45.46
45 TestAddons/parallel/Headlamp 12.94
46 TestAddons/parallel/CloudSpanner 5.37
47 TestAddons/parallel/LocalPath 58.34
48 TestAddons/parallel/NvidiaDevicePlugin 5.35
49 TestAddons/parallel/Yakd 6.01
50 TestAddons/parallel/Volcano 40.17
53 TestAddons/serial/GCPAuth/Namespaces 0.1
54 TestAddons/StoppedEnableDisable 5.92
55 TestCertOptions 46.2
56 TestCertExpiration 249.48
57 TestDockerFlags 50.4
58 TestForceSystemdFlag 43.79
59 TestForceSystemdEnv 43.08
62 TestHyperKitDriverInstallOrUpdate 8.35
65 TestErrorSpam/setup 39.1
66 TestErrorSpam/start 1.4
67 TestErrorSpam/status 0.47
68 TestErrorSpam/pause 1.32
69 TestErrorSpam/unpause 1.34
70 TestErrorSpam/stop 153.84
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 91.17
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 41
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.06
81 TestFunctional/serial/CacheCmd/cache/add_remote 2.96
82 TestFunctional/serial/CacheCmd/cache/add_local 1.35
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
84 TestFunctional/serial/CacheCmd/cache/list 0.08
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.17
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.06
87 TestFunctional/serial/CacheCmd/cache/delete 0.16
88 TestFunctional/serial/MinikubeKubectlCmd 1.13
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.45
90 TestFunctional/serial/ExtraConfig 43.15
91 TestFunctional/serial/ComponentHealth 0.05
92 TestFunctional/serial/LogsCmd 2.89
93 TestFunctional/serial/LogsFileCmd 2.74
94 TestFunctional/serial/InvalidService 4.27
96 TestFunctional/parallel/ConfigCmd 0.51
97 TestFunctional/parallel/DashboardCmd 10.25
98 TestFunctional/parallel/DryRun 0.96
99 TestFunctional/parallel/InternationalLanguage 0.59
100 TestFunctional/parallel/StatusCmd 0.52
104 TestFunctional/parallel/ServiceCmdConnect 8.63
105 TestFunctional/parallel/AddonsCmd 0.22
106 TestFunctional/parallel/PersistentVolumeClaim 26.2
108 TestFunctional/parallel/SSHCmd 0.29
109 TestFunctional/parallel/CpCmd 1.07
110 TestFunctional/parallel/MySQL 27.31
111 TestFunctional/parallel/FileSync 0.22
112 TestFunctional/parallel/CertSync 1.07
116 TestFunctional/parallel/NodeLabels 0.05
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.15
120 TestFunctional/parallel/License 0.55
121 TestFunctional/parallel/Version/short 0.1
122 TestFunctional/parallel/Version/components 0.4
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.18
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.16
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.15
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.16
127 TestFunctional/parallel/ImageCommands/ImageBuild 1.86
128 TestFunctional/parallel/ImageCommands/Setup 1.93
129 TestFunctional/parallel/DockerEnv/bash 0.61
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.23
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.19
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 0.91
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.63
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.42
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.37
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.36
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.82
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.39
140 TestFunctional/parallel/ServiceCmd/DeployApp 21.13
141 TestFunctional/parallel/ServiceCmd/List 0.19
142 TestFunctional/parallel/ServiceCmd/JSONOutput 0.18
143 TestFunctional/parallel/ServiceCmd/HTTPS 0.25
144 TestFunctional/parallel/ServiceCmd/Format 0.25
145 TestFunctional/parallel/ServiceCmd/URL 0.26
147 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.37
148 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
150 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.14
151 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
152 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
153 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.04
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
155 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
156 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
157 TestFunctional/parallel/ProfileCmd/profile_not_create 0.26
158 TestFunctional/parallel/ProfileCmd/profile_list 0.28
159 TestFunctional/parallel/ProfileCmd/profile_json_output 0.26
160 TestFunctional/parallel/MountCmd/any-port 6.4
162 TestFunctional/parallel/MountCmd/VerifyCleanup 1.77
163 TestFunctional/delete_echo-server_images 0.04
164 TestFunctional/delete_my-image_image 0.02
165 TestFunctional/delete_minikube_cached_images 0.02
169 TestMultiControlPlane/serial/StartCluster 201.59
170 TestMultiControlPlane/serial/DeployApp 8.74
171 TestMultiControlPlane/serial/PingHostFromPods 1.27
172 TestMultiControlPlane/serial/AddWorkerNode 52.14
173 TestMultiControlPlane/serial/NodeLabels 0.05
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.34
175 TestMultiControlPlane/serial/CopyFile 9.28
176 TestMultiControlPlane/serial/StopSecondaryNode 8.7
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.26
178 TestMultiControlPlane/serial/RestartSecondaryNode 41.19
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.33
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 195.88
181 TestMultiControlPlane/serial/DeleteSecondaryNode 8.11
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.25
183 TestMultiControlPlane/serial/StopCluster 24.97
184 TestMultiControlPlane/serial/RestartCluster 202.69
185 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.26
186 TestMultiControlPlane/serial/AddSecondaryNode 75.34
187 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.33
190 TestImageBuild/serial/Setup 40.91
191 TestImageBuild/serial/NormalBuild 1.25
192 TestImageBuild/serial/BuildWithBuildArg 0.48
193 TestImageBuild/serial/BuildWithDockerIgnore 0.24
194 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.22
198 TestJSONOutput/start/Command 51.44
199 TestJSONOutput/start/Audit 0
201 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/pause/Command 0.48
205 TestJSONOutput/pause/Audit 0
207 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
210 TestJSONOutput/unpause/Command 0.45
211 TestJSONOutput/unpause/Audit 0
213 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
214 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
216 TestJSONOutput/stop/Command 8.35
217 TestJSONOutput/stop/Audit 0
219 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
220 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
221 TestErrorJSONOutput 0.57
226 TestMainNoArgs 0.08
227 TestMinikubeProfile 217.41
230 TestMountStart/serial/StartWithMountFirst 21.59
231 TestMountStart/serial/VerifyMountFirst 0.29
232 TestMountStart/serial/StartWithMountSecond 18.4
233 TestMountStart/serial/VerifyMountSecond 0.33
234 TestMountStart/serial/DeleteFirst 2.34
235 TestMountStart/serial/VerifyMountPostDelete 0.3
236 TestMountStart/serial/Stop 2.38
237 TestMountStart/serial/RestartStopped 20.06
238 TestMountStart/serial/VerifyMountPostStop 0.3
241 TestMultiNode/serial/FreshStart2Nodes 115.56
242 TestMultiNode/serial/DeployApp2Nodes 4.13
243 TestMultiNode/serial/PingHostFrom2Pods 0.86
244 TestMultiNode/serial/AddNode 44.48
245 TestMultiNode/serial/MultiNodeLabels 0.05
246 TestMultiNode/serial/ProfileList 0.18
247 TestMultiNode/serial/CopyFile 5.28
248 TestMultiNode/serial/StopNode 2.85
249 TestMultiNode/serial/StartAfterStop 41.7
250 TestMultiNode/serial/RestartKeepsNodes 175.88
251 TestMultiNode/serial/DeleteNode 3.43
252 TestMultiNode/serial/StopMultiNode 16.78
254 TestMultiNode/serial/ValidateNameConflict 43.23
258 TestPreload 176.22
261 TestSkaffold 112.64
264 TestRunningBinaryUpgrade 88.63
266 TestKubernetesUpgrade 121.17
279 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.54
280 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.71
281 TestStoppedBinaryUpgrade/Setup 0.91
282 TestStoppedBinaryUpgrade/Upgrade 114.92
284 TestPause/serial/Start 90.12
285 TestPause/serial/SecondStartNoReconfiguration 38.24
286 TestStoppedBinaryUpgrade/MinikubeLogs 2.84
295 TestNoKubernetes/serial/StartNoK8sWithVersion 0.77
296 TestNoKubernetes/serial/StartWithK8s 40.78
297 TestPause/serial/Pause 0.56
298 TestPause/serial/VerifyStatus 0.16
299 TestPause/serial/Unpause 0.52
300 TestPause/serial/PauseAgain 0.58
301 TestPause/serial/DeletePaused 5.24
302 TestPause/serial/VerifyDeletedResources 0.19
303 TestNetworkPlugins/group/auto/Start 63.88
304 TestNoKubernetes/serial/StartWithStopK8s 13.51
305 TestNoKubernetes/serial/Start 20.87
306 TestNoKubernetes/serial/VerifyK8sNotRunning 0.13
307 TestNoKubernetes/serial/ProfileList 0.47
308 TestNoKubernetes/serial/Stop 2.4
310 TestNetworkPlugins/group/auto/KubeletFlags 0.16
311 TestNetworkPlugins/group/auto/NetCatPod 11.15
312 TestNetworkPlugins/group/auto/DNS 0.13
313 TestNetworkPlugins/group/auto/Localhost 0.1
314 TestNetworkPlugins/group/auto/HairPin 0.11
315 TestNetworkPlugins/group/calico/Start 188.44
316 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.13
317 TestNetworkPlugins/group/custom-flannel/Start 61.59
318 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.16
319 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.13
320 TestNetworkPlugins/group/custom-flannel/DNS 0.13
321 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
322 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
324 TestNetworkPlugins/group/calico/ControllerPod 6.01
325 TestNetworkPlugins/group/calico/KubeletFlags 0.16
326 TestNetworkPlugins/group/calico/NetCatPod 11.14
327 TestNetworkPlugins/group/calico/DNS 0.13
328 TestNetworkPlugins/group/calico/Localhost 0.09
329 TestNetworkPlugins/group/calico/HairPin 0.11
330 TestNetworkPlugins/group/kindnet/Start 70.61
331 TestNetworkPlugins/group/flannel/Start 60.22
332 TestNetworkPlugins/group/kindnet/ControllerPod 6
333 TestNetworkPlugins/group/kindnet/KubeletFlags 0.15
334 TestNetworkPlugins/group/kindnet/NetCatPod 12.13
335 TestNetworkPlugins/group/kindnet/DNS 0.12
336 TestNetworkPlugins/group/kindnet/Localhost 0.1
337 TestNetworkPlugins/group/kindnet/HairPin 0.1
338 TestNetworkPlugins/group/enable-default-cni/Start 207.85
339 TestNetworkPlugins/group/flannel/ControllerPod 6
340 TestNetworkPlugins/group/flannel/KubeletFlags 0.15
341 TestNetworkPlugins/group/flannel/NetCatPod 9.13
342 TestNetworkPlugins/group/flannel/DNS 0.14
343 TestNetworkPlugins/group/flannel/Localhost 0.1
344 TestNetworkPlugins/group/flannel/HairPin 0.11
345 TestNetworkPlugins/group/bridge/Start 89.34
346 TestNetworkPlugins/group/bridge/KubeletFlags 0.16
347 TestNetworkPlugins/group/bridge/NetCatPod 12.14
348 TestNetworkPlugins/group/bridge/DNS 0.13
349 TestNetworkPlugins/group/bridge/Localhost 0.1
350 TestNetworkPlugins/group/bridge/HairPin 0.1
351 TestNetworkPlugins/group/kubenet/Start 52.37
352 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.16
353 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.14
354 TestNetworkPlugins/group/kubenet/KubeletFlags 0.16
355 TestNetworkPlugins/group/kubenet/NetCatPod 12.13
356 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
357 TestNetworkPlugins/group/enable-default-cni/Localhost 0.09
358 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
359 TestNetworkPlugins/group/kubenet/DNS 0.12
360 TestNetworkPlugins/group/kubenet/Localhost 0.1
361 TestNetworkPlugins/group/kubenet/HairPin 0.1
363 TestStartStop/group/old-k8s-version/serial/FirstStart 167.69
365 TestStartStop/group/embed-certs/serial/FirstStart 181.41
366 TestStartStop/group/old-k8s-version/serial/DeployApp 9.36
367 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.82
368 TestStartStop/group/old-k8s-version/serial/Stop 8.46
369 TestStartStop/group/embed-certs/serial/DeployApp 8.22
370 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.32
371 TestStartStop/group/old-k8s-version/serial/SecondStart 404.1
372 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 0.76
373 TestStartStop/group/embed-certs/serial/Stop 8.41
374 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.32
375 TestStartStop/group/embed-certs/serial/SecondStart 428.3
376 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
377 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.06
378 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.16
379 TestStartStop/group/old-k8s-version/serial/Pause 1.86
381 TestStartStop/group/no-preload/serial/FirstStart 59.47
382 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 6
383 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.06
384 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.16
385 TestStartStop/group/embed-certs/serial/Pause 1.99
387 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 54.59
388 TestStartStop/group/no-preload/serial/DeployApp 8.22
389 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.75
390 TestStartStop/group/no-preload/serial/Stop 8.54
391 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.31
392 TestStartStop/group/no-preload/serial/SecondStart 315.64
393 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.22
394 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.81
395 TestStartStop/group/default-k8s-diff-port/serial/Stop 8.41
396 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.31
397 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 408.26
398 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6.01
399 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.06
400 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.16
401 TestStartStop/group/no-preload/serial/Pause 1.93
403 TestStartStop/group/newest-cni/serial/FirstStart 41.48
404 TestStartStop/group/newest-cni/serial/DeployApp 0
405 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.73
406 TestStartStop/group/newest-cni/serial/Stop 8.44
407 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.32
408 TestStartStop/group/newest-cni/serial/SecondStart 29.32
409 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
410 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
411 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.16
412 TestStartStop/group/newest-cni/serial/Pause 1.79
413 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
414 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
415 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.16
416 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.03
TestDownloadOnly/v1.20.0/json-events (12.23s)

=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-770000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-770000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (12.226171759s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (12.23s)

TestDownloadOnly/v1.20.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

TestDownloadOnly/v1.20.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

TestDownloadOnly/v1.20.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-770000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-770000: exit status 85 (286.014392ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-770000 | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT |          |
	|         | -p download-only-770000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/18 20:25:07
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0718 20:25:07.092924    1950 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:25:07.093124    1950 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:25:07.093130    1950 out.go:304] Setting ErrFile to fd 2...
	I0718 20:25:07.093134    1950 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:25:07.093290    1950 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1411/.minikube/bin
	W0718 20:25:07.093915    1950 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19302-1411/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19302-1411/.minikube/config/config.json: no such file or directory
	I0718 20:25:07.095942    1950 out.go:298] Setting JSON to true
	I0718 20:25:07.118322    1950 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1481,"bootTime":1721358026,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0718 20:25:07.118413    1950 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 20:25:07.140071    1950 out.go:97] [download-only-770000] minikube v1.33.1 on Darwin 14.5
	I0718 20:25:07.140274    1950 notify.go:220] Checking for updates...
	W0718 20:25:07.140283    1950 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19302-1411/.minikube/cache/preloaded-tarball: no such file or directory
	I0718 20:25:07.162011    1950 out.go:169] MINIKUBE_LOCATION=19302
	I0718 20:25:07.183141    1950 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19302-1411/kubeconfig
	I0718 20:25:07.205057    1950 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 20:25:07.226894    1950 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 20:25:07.248087    1950 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1411/.minikube
	W0718 20:25:07.289792    1950 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0718 20:25:07.290295    1950 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 20:25:07.346064    1950 out.go:97] Using the hyperkit driver based on user configuration
	I0718 20:25:07.346122    1950 start.go:297] selected driver: hyperkit
	I0718 20:25:07.346134    1950 start.go:901] validating driver "hyperkit" against <nil>
	I0718 20:25:07.346372    1950 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 20:25:07.346783    1950 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19302-1411/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0718 20:25:07.751788    1950 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0718 20:25:07.756885    1950 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 20:25:07.756906    1950 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0718 20:25:07.756931    1950 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 20:25:07.761523    1950 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0718 20:25:07.761985    1950 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0718 20:25:07.762032    1950 cni.go:84] Creating CNI manager for ""
	I0718 20:25:07.762050    1950 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0718 20:25:07.762118    1950 start.go:340] cluster config:
	{Name:download-only-770000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-770000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:25:07.762345    1950 iso.go:125] acquiring lock: {Name:mka3a56e9fb30ac1fad44235cb5c998fd919cd8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 20:25:07.783477    1950 out.go:97] Downloading VM boot image ...
	I0718 20:25:07.783540    1950 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/19302-1411/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0718 20:25:12.057926    1950 out.go:97] Starting "download-only-770000" primary control-plane node in "download-only-770000" cluster
	I0718 20:25:12.057967    1950 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0718 20:25:12.118400    1950 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0718 20:25:12.118459    1950 cache.go:56] Caching tarball of preloaded images
	I0718 20:25:12.118827    1950 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0718 20:25:12.140397    1950 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0718 20:25:12.140426    1950 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0718 20:25:12.220542    1950 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19302-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-770000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-770000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.29s)

TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.2s)

=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-770000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.20s)

TestDownloadOnly/v1.30.3/json-events (7.38s)

=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-057000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-057000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=hyperkit : (7.381549334s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (7.38s)

TestDownloadOnly/v1.30.3/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

TestDownloadOnly/v1.30.3/kubectl (0s)

=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

TestDownloadOnly/v1.30.3/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-057000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-057000: exit status 85 (291.306211ms)

-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-770000 | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT |                     |
	|         | -p download-only-770000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT | 18 Jul 24 20:25 PDT |
	| delete  | -p download-only-770000        | download-only-770000 | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT | 18 Jul 24 20:25 PDT |
	| start   | -o=json --download-only        | download-only-057000 | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT |                     |
	|         | -p download-only-057000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/18 20:25:20
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0718 20:25:20.046667    1978 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:25:20.046938    1978 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:25:20.046943    1978 out.go:304] Setting ErrFile to fd 2...
	I0718 20:25:20.046947    1978 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:25:20.047110    1978 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1411/.minikube/bin
	I0718 20:25:20.048579    1978 out.go:298] Setting JSON to true
	I0718 20:25:20.070907    1978 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1494,"bootTime":1721358026,"procs":441,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0718 20:25:20.071002    1978 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 20:25:20.092881    1978 out.go:97] [download-only-057000] minikube v1.33.1 on Darwin 14.5
	I0718 20:25:20.093112    1978 notify.go:220] Checking for updates...
	I0718 20:25:20.114504    1978 out.go:169] MINIKUBE_LOCATION=19302
	I0718 20:25:20.135600    1978 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19302-1411/kubeconfig
	I0718 20:25:20.156761    1978 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 20:25:20.198585    1978 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 20:25:20.219859    1978 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1411/.minikube
	W0718 20:25:20.261424    1978 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0718 20:25:20.261954    1978 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 20:25:20.292599    1978 out.go:97] Using the hyperkit driver based on user configuration
	I0718 20:25:20.292671    1978 start.go:297] selected driver: hyperkit
	I0718 20:25:20.292683    1978 start.go:901] validating driver "hyperkit" against <nil>
	I0718 20:25:20.292884    1978 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 20:25:20.293125    1978 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19302-1411/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0718 20:25:20.302912    1978 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0718 20:25:20.306781    1978 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 20:25:20.306801    1978 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0718 20:25:20.306825    1978 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 20:25:20.309494    1978 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0718 20:25:20.309651    1978 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0718 20:25:20.309673    1978 cni.go:84] Creating CNI manager for ""
	I0718 20:25:20.309688    1978 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 20:25:20.309697    1978 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0718 20:25:20.309762    1978 start.go:340] cluster config:
	{Name:download-only-057000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-057000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:25:20.309853    1978 iso.go:125] acquiring lock: {Name:mka3a56e9fb30ac1fad44235cb5c998fd919cd8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 20:25:20.330622    1978 out.go:97] Starting "download-only-057000" primary control-plane node in "download-only-057000" cluster
	I0718 20:25:20.330657    1978 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:25:20.383965    1978 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0718 20:25:20.384007    1978 cache.go:56] Caching tarball of preloaded images
	I0718 20:25:20.384375    1978 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0718 20:25:20.406130    1978 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0718 20:25:20.406158    1978 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0718 20:25:20.487558    1978 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4?checksum=md5:6304692df2fe6f7b0bdd7f93d160be8c -> /Users/jenkins/minikube-integration/19302-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	
	
	* The control-plane node download-only-057000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-057000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.29s)

TestDownloadOnly/v1.30.3/DeleteAll (0.31s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.31s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-057000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.31.0-beta.0/json-events (15.3s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-330000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-330000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=hyperkit : (15.296758242s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (15.30s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-330000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-330000: exit status 85 (293.410887ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-770000 | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT |                     |
	|         | -p download-only-770000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=hyperkit                   |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT | 18 Jul 24 20:25 PDT |
	| delete  | -p download-only-770000             | download-only-770000 | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT | 18 Jul 24 20:25 PDT |
	| start   | -o=json --download-only             | download-only-057000 | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT |                     |
	|         | -p download-only-057000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=hyperkit                   |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT | 18 Jul 24 20:25 PDT |
	| delete  | -p download-only-057000             | download-only-057000 | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT | 18 Jul 24 20:25 PDT |
	| start   | -o=json --download-only             | download-only-330000 | jenkins | v1.33.1 | 18 Jul 24 20:25 PDT |                     |
	|         | -p download-only-330000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=hyperkit                   |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/18 20:25:28
	Running on machine: MacOS-Agent-4
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0718 20:25:28.235891    2002 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:25:28.236163    2002 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:25:28.236169    2002 out.go:304] Setting ErrFile to fd 2...
	I0718 20:25:28.236173    2002 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:25:28.236355    2002 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1411/.minikube/bin
	I0718 20:25:28.237764    2002 out.go:298] Setting JSON to true
	I0718 20:25:28.259817    2002 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":1502,"bootTime":1721358026,"procs":439,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0718 20:25:28.259909    2002 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 20:25:28.281065    2002 out.go:97] [download-only-330000] minikube v1.33.1 on Darwin 14.5
	I0718 20:25:28.281295    2002 notify.go:220] Checking for updates...
	I0718 20:25:28.303129    2002 out.go:169] MINIKUBE_LOCATION=19302
	I0718 20:25:28.324750    2002 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19302-1411/kubeconfig
	I0718 20:25:28.345933    2002 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 20:25:28.367098    2002 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 20:25:28.389086    2002 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1411/.minikube
	W0718 20:25:28.431259    2002 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0718 20:25:28.431744    2002 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 20:25:28.462121    2002 out.go:97] Using the hyperkit driver based on user configuration
	I0718 20:25:28.462205    2002 start.go:297] selected driver: hyperkit
	I0718 20:25:28.462217    2002 start.go:901] validating driver "hyperkit" against <nil>
	I0718 20:25:28.462433    2002 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 20:25:28.462672    2002 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19302-1411/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0718 20:25:28.472561    2002 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0718 20:25:28.476259    2002 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 20:25:28.476279    2002 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0718 20:25:28.476312    2002 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0718 20:25:28.478856    2002 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0718 20:25:28.479016    2002 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0718 20:25:28.479038    2002 cni.go:84] Creating CNI manager for ""
	I0718 20:25:28.479054    2002 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0718 20:25:28.479061    2002 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0718 20:25:28.479124    2002 start.go:340] cluster config:
	{Name:download-only-330000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-330000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:25:28.479206    2002 iso.go:125] acquiring lock: {Name:mka3a56e9fb30ac1fad44235cb5c998fd919cd8d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0718 20:25:28.500948    2002 out.go:97] Starting "download-only-330000" primary control-plane node in "download-only-330000" cluster
	I0718 20:25:28.500983    2002 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0718 20:25:28.554101    2002 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0718 20:25:28.554118    2002 cache.go:56] Caching tarball of preloaded images
	I0718 20:25:28.554370    2002 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0718 20:25:28.576091    2002 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0718 20:25:28.576119    2002 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0718 20:25:28.659624    2002 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4?checksum=md5:181d3c061f7abe363e688bf9ac3c9580 -> /Users/jenkins/minikube-integration/19302-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0718 20:25:32.961501    2002 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0718 20:25:32.961748    2002 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19302-1411/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0718 20:25:33.426617    2002 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0718 20:25:33.426858    2002 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/download-only-330000/config.json ...
	I0718 20:25:33.426882    2002 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/download-only-330000/config.json: {Name:mka4c9ffdbdd0a6fbc991958154b94404bce688d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0718 20:25:33.427234    2002 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0718 20:25:33.427439    2002 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19302-1411/.minikube/cache/darwin/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-330000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-330000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
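The `download.go:107` line above fetches the preload tarball with a `?checksum=md5:...` query, and `preload.go:254` then verifies the file on disk. A minimal shell sketch of that verify step (the helper name, demo file, and demo content are hypothetical; minikube itself does this in Go):

```shell
# verify_md5 FILE EXPECTED -> succeeds only if FILE's md5 matches EXPECTED.
# Uses md5sum where available (Linux), falling back to md5 -q (macOS).
verify_md5() {
  file=$1; want=$2
  if command -v md5sum >/dev/null 2>&1; then
    got=$(md5sum "$file" | cut -d' ' -f1)
  else
    got=$(md5 -q "$file")
  fi
  [ "$got" = "$want" ]
}

# Demo on a throwaway file (the md5 of the string "hello" is well known).
printf 'hello' > /tmp/preload-demo.bin
verify_md5 /tmp/preload-demo.bin 5d41402abc4b2a76b9719d911017c592 && echo "checksum ok"
```

A non-zero return from the helper is the analogue of the failure path behind the `verifying checksum` step logged at `preload.go:254`.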
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.29s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.23s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-330000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.21s)

TestBinaryMirror (0.94s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-104000 --alsologtostderr --binary-mirror http://127.0.0.1:49640 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-104000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-104000
--- PASS: TestBinaryMirror (0.94s)

TestOffline (62.03s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-524000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-524000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : (56.742780285s)
helpers_test.go:175: Cleaning up "offline-docker-524000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-524000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-524000: (5.291754383s)
--- PASS: TestOffline (62.03s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.17s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-719000
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-719000: exit status 85 (166.795612ms)

-- stdout --
	* Profile "addons-719000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-719000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.17s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-719000
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-719000: exit status 85 (187.518252ms)

-- stdout --
	* Profile "addons-719000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-719000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

TestAddons/Setup (327.8s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-719000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-darwin-amd64 start -p addons-719000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (5m27.797530227s)
--- PASS: TestAddons/Setup (327.80s)

TestAddons/parallel/Registry (14.38s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 10.404245ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-7fr4v" [961f5494-43b4-462a-8428-6871a380d2f7] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003483305s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-7ctw5" [6357fe21-5235-4eee-b4c0-16e56944af5f] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004484818s
addons_test.go:342: (dbg) Run:  kubectl --context addons-719000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-719000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-719000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.695966589s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p addons-719000 ip
2024/07/18 20:31:27 [DEBUG] GET http://192.169.0.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 -p addons-719000 addons disable registry --alsologtostderr -v=1
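The `waiting 6m0s for pods matching ... healthy within ...` lines above are a poll-until-healthy loop. A simplified, hypothetical shell version of that pattern (fixed one-second interval and an attempt-count bound instead of a deadline):

```shell
# retry_until N CMD...: re-run CMD until it exits 0, giving up after N tries.
retry_until() {
  n=$1; shift
  i=0
  while ! "$@"; do
    i=$((i+1))
    [ "$i" -ge "$n" ] && return 1
    sleep 1
  done
}

# Demo: wait for a flag file that appears after two seconds.
rm -f /tmp/ready.flag
( sleep 2; touch /tmp/ready.flag ) &
retry_until 10 test -e /tmp/ready.flag && echo "ready"
```

In the real test the polled condition is pod state, e.g. whether `kubectl get pods -l actual-registry=true` reports Running, rather than a flag file.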
--- PASS: TestAddons/parallel/Registry (14.38s)

TestAddons/parallel/Ingress (18.34s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-719000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-719000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-719000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [ee12317b-7f6b-4cd5-9412-f21f6fee5c11] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [ee12317b-7f6b-4cd5-9412-f21f6fee5c11] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 9.004245283s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p addons-719000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-719000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 -p addons-719000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.169.0.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p addons-719000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 -p addons-719000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-amd64 -p addons-719000 addons disable ingress --alsologtostderr -v=1: (7.438034118s)
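The `curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'` step above exercises name-based routing: the ingress controller selects the backend from the Host header, not from the address. A hedged sketch that reports just the HTTP status (helper name and timeout are hypothetical):

```shell
# check_ingress ADDR HOST: request ADDR with a spoofed Host header and print
# the HTTP status code; curl's %{http_code} write-out prints 000 when the
# connection itself fails.
check_ingress() {
  addr=$1; host=$2
  curl -s -o /dev/null -w '%{http_code}' --connect-timeout 3 \
    -H "Host: $host" "http://$addr/"
}
```

Against the cluster above this would be `check_ingress 192.169.0.2 nginx.example.com`, with `200` as the healthy result.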
--- PASS: TestAddons/parallel/Ingress (18.34s)

TestAddons/parallel/InspektorGadget (10.49s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-p5p8v" [d1dcf484-770a-4599-a8fd-715c6c6230c9] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.007163205s
addons_test.go:843: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-719000
addons_test.go:843: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-719000: (5.47829998s)
--- PASS: TestAddons/parallel/InspektorGadget (10.49s)

TestAddons/parallel/MetricsServer (5.62s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 2.096106ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-gjc8k" [1cd49245-14db-4715-baf3-1fc5f562fb0e] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004112283s
addons_test.go:417: (dbg) Run:  kubectl --context addons-719000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 -p addons-719000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.62s)

TestAddons/parallel/HelmTiller (10.88s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.961902ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-nrqf5" [97a5c4b7-0c22-4e46-b729-c348724bf592] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004663163s
addons_test.go:475: (dbg) Run:  kubectl --context addons-719000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-719000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (5.459590524s)
addons_test.go:480: kubectl --context addons-719000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: unexpected stderr: Unable to use a TTY - input is not a terminal or the right kind of file
If you don't see a command prompt, try pressing enter.
warning: couldn't attach to pod/helm-test, falling back to streaming logs: 
addons_test.go:492: (dbg) Run:  out/minikube-darwin-amd64 -p addons-719000 addons disable helm-tiller --alsologtostderr -v=1
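The `Unable to use a TTY - input is not a terminal` stderr above comes from passing `-it` to `kubectl run` when stdin is not a terminal, as in this CI run. A small sketch of the usual workaround: request a TTY only when one is actually attached (the helper name is hypothetical):

```shell
# tty_flag: emit "-it" only when stdin is a terminal, so kubectl run never
# asks for a TTY it cannot get.
tty_flag() {
  if [ -t 0 ]; then
    echo "-it"
  fi
}

# Hypothetical usage (unquoted $(tty_flag) so an empty result adds no argument):
# kubectl run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 \
#   $(tty_flag) --namespace=kube-system -- version
```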
--- PASS: TestAddons/parallel/HelmTiller (10.88s)

TestAddons/parallel/CSI (45.46s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 3.570649ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-719000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-719000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [a2abad8b-4e10-4343-a016-7305e04cd174] Pending
helpers_test.go:344: "task-pv-pod" [a2abad8b-4e10-4343-a016-7305e04cd174] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [a2abad8b-4e10-4343-a016-7305e04cd174] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 7.004546821s
addons_test.go:586: (dbg) Run:  kubectl --context addons-719000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-719000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-719000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-719000 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-719000 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-719000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-719000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [de10f88d-78ed-4000-a4e8-df641d5b1a26] Pending
helpers_test.go:344: "task-pv-pod-restore" [de10f88d-78ed-4000-a4e8-df641d5b1a26] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [de10f88d-78ed-4000-a4e8-df641d5b1a26] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 8.005590941s
addons_test.go:628: (dbg) Run:  kubectl --context addons-719000 delete pod task-pv-pod-restore
addons_test.go:632: (dbg) Run:  kubectl --context addons-719000 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-719000 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-darwin-amd64 -p addons-719000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-darwin-amd64 -p addons-719000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.376253128s)
addons_test.go:644: (dbg) Run:  out/minikube-darwin-amd64 -p addons-719000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (45.46s)

TestAddons/parallel/Headlamp (12.94s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-719000 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-tv9bm" [185018eb-4b66-4174-903d-4fed84ff0758] Pending
helpers_test.go:344: "headlamp-7867546754-tv9bm" [185018eb-4b66-4174-903d-4fed84ff0758] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-tv9bm" [185018eb-4b66-4174-903d-4fed84ff0758] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.005192552s
--- PASS: TestAddons/parallel/Headlamp (12.94s)

TestAddons/parallel/CloudSpanner (5.37s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-sbkp6" [79a280d6-f261-4160-9f19-b9b7f51a4bcf] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.003283374s
addons_test.go:862: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-719000
--- PASS: TestAddons/parallel/CloudSpanner (5.37s)

TestAddons/parallel/LocalPath (58.34s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-719000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-719000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-719000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [5cb1188b-8a89-48d1-a7c5-890300b42f2d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [5cb1188b-8a89-48d1-a7c5-890300b42f2d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [5cb1188b-8a89-48d1-a7c5-890300b42f2d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 9.004640328s
addons_test.go:992: (dbg) Run:  kubectl --context addons-719000 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-darwin-amd64 -p addons-719000 ssh "cat /opt/local-path-provisioner/pvc-3827379d-7283-4c53-a820-a6be997d5eb9_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-719000 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-719000 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-darwin-amd64 -p addons-719000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-darwin-amd64 -p addons-719000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.69626447s)
--- PASS: TestAddons/parallel/LocalPath (58.34s)

TestAddons/parallel/NvidiaDevicePlugin (5.35s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-frcqp" [975fdd25-c63f-4502-8f11-c96d6827989f] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.006089445s
addons_test.go:1056: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-719000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.35s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd
=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-qpqb2" [73423b47-61ff-4065-9513-ecf864350a1f] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004510458s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/parallel/Volcano (40.17s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano
=== CONT  TestAddons/parallel/Volcano
addons_test.go:897: volcano-admission stabilized in 1.523696ms
addons_test.go:889: volcano-scheduler stabilized in 1.551316ms
addons_test.go:905: volcano-controller stabilized in 1.841526ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-dmth6" [f7cf1f20-85b8-4195-a039-7794c2ce7fc7] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 6.003887325s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-c42jg" [2e2e93cd-a2a3-4fa1-b821-29405b02dd1e] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.002694794s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-zbw55" [343474eb-869e-4b7a-ae64-f55c772323a7] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.003415203s
addons_test.go:924: (dbg) Run:  kubectl --context addons-719000 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-719000 create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-719000 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [03c8a1b4-f961-44d7-b222-64b7f48ecf99] Pending
helpers_test.go:344: "test-job-nginx-0" [03c8a1b4-f961-44d7-b222-64b7f48ecf99] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [03c8a1b4-f961-44d7-b222-64b7f48ecf99] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 14.003444784s
addons_test.go:960: (dbg) Run:  out/minikube-darwin-amd64 -p addons-719000 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-darwin-amd64 -p addons-719000 addons disable volcano --alsologtostderr -v=1: (9.928247868s)
--- PASS: TestAddons/parallel/Volcano (40.17s)

TestAddons/serial/GCPAuth/Namespaces (0.1s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-719000 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-719000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)

TestAddons/StoppedEnableDisable (5.92s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-719000
addons_test.go:174: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-719000: (5.376890678s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-719000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-719000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-719000
--- PASS: TestAddons/StoppedEnableDisable (5.92s)

TestCertOptions (46.2s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-159000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-159000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : (40.625464548s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-159000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-159000 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-159000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-159000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-159000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-159000: (5.23929512s)
--- PASS: TestCertOptions (46.20s)

TestCertExpiration (249.48s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-704000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
E0718 21:21:13.857418    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-704000 --memory=2048 --cert-expiration=3m --driver=hyperkit : (35.583804168s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-704000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
E0718 21:24:47.222711    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-704000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : (28.657073522s)
helpers_test.go:175: Cleaning up "cert-expiration-704000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-704000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-704000: (5.239190475s)
--- PASS: TestCertExpiration (249.48s)

TestDockerFlags (50.4s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-337000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-337000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : (46.672118335s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-337000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-337000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-337000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-337000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-337000: (3.388649201s)
--- PASS: TestDockerFlags (50.40s)

TestForceSystemdFlag (43.79s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-154000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-154000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : (38.378159387s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-154000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-154000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-154000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-154000: (5.226010224s)
--- PASS: TestForceSystemdFlag (43.79s)

TestForceSystemdEnv (43.08s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-867000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
E0718 21:20:10.554237    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
docker_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-867000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : (39.504327691s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-867000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-867000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-867000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-867000: (3.412261614s)
--- PASS: TestForceSystemdEnv (43.08s)

TestHyperKitDriverInstallOrUpdate (8.35s)

=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.35s)

TestErrorSpam/setup (39.1s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-064000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-064000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-064000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-064000 --driver=hyperkit : (39.103782457s)
--- PASS: TestErrorSpam/setup (39.10s)

TestErrorSpam/start (1.4s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-064000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-064000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-064000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-064000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-064000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-064000 start --dry-run
--- PASS: TestErrorSpam/start (1.40s)

TestErrorSpam/status (0.47s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-064000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-064000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-064000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-064000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-064000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-064000 status
--- PASS: TestErrorSpam/status (0.47s)

TestErrorSpam/pause (1.32s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-064000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-064000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-064000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-064000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-064000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-064000 pause
--- PASS: TestErrorSpam/pause (1.32s)

TestErrorSpam/unpause (1.34s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-064000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-064000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-064000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-064000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-064000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-064000 unpause
--- PASS: TestErrorSpam/unpause (1.34s)

TestErrorSpam/stop (153.84s)

=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-064000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-064000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-064000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-064000 stop: (3.388719333s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-064000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-064000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-064000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-064000 stop: (1m15.222311902s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-064000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-064000 stop
E0718 20:36:13.765577    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
E0718 20:36:13.773121    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
E0718 20:36:13.783386    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
E0718 20:36:13.803860    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
E0718 20:36:13.846050    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
E0718 20:36:13.926853    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
E0718 20:36:14.087445    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
E0718 20:36:14.407658    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
E0718 20:36:15.049922    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
E0718 20:36:16.332228    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
E0718 20:36:18.893481    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
E0718 20:36:24.015902    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
E0718 20:36:34.258369    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-064000 --log_dir /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/nospam-064000 stop: (1m15.225276048s)
--- PASS: TestErrorSpam/stop (153.84s)

=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19302-1411/.minikube/files/etc/test/nested/copy/1948/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-345000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
E0718 20:36:54.739088    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
E0718 20:37:35.700279    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-345000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (1m31.168819455s)
--- PASS: TestFunctional/serial/StartWithProxy (91.17s)

=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-345000 --alsologtostderr -v=8
E0718 20:38:57.622363    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-345000 --alsologtostderr -v=8: (41.000523469s)
functional_test.go:659: soft start took 41.001046147s for "functional-345000" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.00s)

=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-345000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-345000 cache add registry.k8s.io/pause:3.1: (1.070488801s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (2.96s)

=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-345000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialCacheCmdcacheadd_local360747974/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 cache add minikube-local-cache-test:functional-345000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 cache delete minikube-local-cache-test:functional-345000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-345000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.35s)

=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-345000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (147.725854ms)

-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.06s)

=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 kubectl -- --context functional-345000 get pods
functional_test.go:712: (dbg) Done: out/minikube-darwin-amd64 -p functional-345000 kubectl -- --context functional-345000 get pods: (1.127168373s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.13s)

=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-345000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-345000 get pods: (1.444265054s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.45s)

=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-345000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-345000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (43.148378659s)
functional_test.go:757: restart took 43.148486852s for "functional-345000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (43.15s)

=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-345000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-345000 logs: (2.887943451s)
--- PASS: TestFunctional/serial/LogsCmd (2.89s)

=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd3606947019/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-345000 logs --file /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalserialLogsFileCmd3606947019/001/logs.txt: (2.73467588s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.74s)

=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-345000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-345000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-345000: exit status 115 (266.454581ms)

-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.4:31827 |
	|-----------|-------------|-------------|--------------------------|

-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-345000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.27s)

=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-345000 config get cpus: exit status 14 (71.235681ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-345000 config get cpus: exit status 14 (55.056276ms)

** stderr ** 
	Error: specified key could not be found in config

** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.51s)

=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-345000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-345000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3440: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (10.25s)

=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-345000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-345000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (514.436066ms)

-- stdout --
	* [functional-345000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile

-- /stdout --
** stderr ** 
	I0718 20:41:03.875763    3380 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:41:03.875948    3380 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:41:03.875954    3380 out.go:304] Setting ErrFile to fd 2...
	I0718 20:41:03.875958    3380 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:41:03.876133    3380 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1411/.minikube/bin
	I0718 20:41:03.877568    3380 out.go:298] Setting JSON to false
	I0718 20:41:03.900199    3380 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2437,"bootTime":1721358026,"procs":463,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0718 20:41:03.900297    3380 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 20:41:03.923505    3380 out.go:177] * [functional-345000] minikube v1.33.1 on Darwin 14.5
	I0718 20:41:03.965229    3380 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 20:41:03.965343    3380 notify.go:220] Checking for updates...
	I0718 20:41:04.007869    3380 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1411/kubeconfig
	I0718 20:41:04.028989    3380 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 20:41:04.050292    3380 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 20:41:04.071122    3380 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1411/.minikube
	I0718 20:41:04.092157    3380 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 20:41:04.113490    3380 config.go:182] Loaded profile config "functional-345000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:41:04.113973    3380 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 20:41:04.114050    3380 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 20:41:04.123162    3380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50768
	I0718 20:41:04.123556    3380 main.go:141] libmachine: () Calling .GetVersion
	I0718 20:41:04.123977    3380 main.go:141] libmachine: Using API Version  1
	I0718 20:41:04.123986    3380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 20:41:04.124208    3380 main.go:141] libmachine: () Calling .GetMachineName
	I0718 20:41:04.124326    3380 main.go:141] libmachine: (functional-345000) Calling .DriverName
	I0718 20:41:04.124545    3380 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 20:41:04.124782    3380 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 20:41:04.124808    3380 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 20:41:04.133278    3380 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50770
	I0718 20:41:04.133609    3380 main.go:141] libmachine: () Calling .GetVersion
	I0718 20:41:04.133911    3380 main.go:141] libmachine: Using API Version  1
	I0718 20:41:04.133921    3380 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 20:41:04.134127    3380 main.go:141] libmachine: () Calling .GetMachineName
	I0718 20:41:04.134235    3380 main.go:141] libmachine: (functional-345000) Calling .DriverName
	I0718 20:41:04.163203    3380 out.go:177] * Using the hyperkit driver based on existing profile
	I0718 20:41:04.205185    3380 start.go:297] selected driver: hyperkit
	I0718 20:41:04.205211    3380 start.go:901] validating driver "hyperkit" against &{Name:functional-345000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.3 ClusterName:functional-345000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:41:04.205401    3380 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 20:41:04.231084    3380 out.go:177] 
	W0718 20:41:04.251998    3380 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0718 20:41:04.272974    3380 out.go:177] 

** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-345000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (0.96s)

=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-345000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-345000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (591.626521ms)

-- stdout --
	* [functional-345000] minikube v1.33.1 sur Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant

-- /stdout --
** stderr ** 
	I0718 20:41:04.832414    3396 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:41:04.832662    3396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:41:04.832667    3396 out.go:304] Setting ErrFile to fd 2...
	I0718 20:41:04.832671    3396 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:41:04.832876    3396 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1411/.minikube/bin
	I0718 20:41:04.834455    3396 out.go:298] Setting JSON to false
	I0718 20:41:04.857963    3396 start.go:129] hostinfo: {"hostname":"MacOS-Agent-4.local","uptime":2438,"bootTime":1721358026,"procs":467,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"f2f27e25-cfda-5ffd-9706-e98286194e62"}
	W0718 20:41:04.858054    3396 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0718 20:41:04.880032    3396 out.go:177] * [functional-345000] minikube v1.33.1 sur Darwin 14.5
	I0718 20:41:04.952816    3396 notify.go:220] Checking for updates...
	I0718 20:41:04.989863    3396 out.go:177]   - MINIKUBE_LOCATION=19302
	I0718 20:41:05.047909    3396 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1411/kubeconfig
	I0718 20:41:05.089847    3396 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0718 20:41:05.132145    3396 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0718 20:41:05.152813    3396 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1411/.minikube
	I0718 20:41:05.173683    3396 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0718 20:41:05.195699    3396 config.go:182] Loaded profile config "functional-345000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:41:05.196347    3396 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 20:41:05.196418    3396 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 20:41:05.206054    3396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50786
	I0718 20:41:05.206415    3396 main.go:141] libmachine: () Calling .GetVersion
	I0718 20:41:05.206831    3396 main.go:141] libmachine: Using API Version  1
	I0718 20:41:05.206842    3396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 20:41:05.207078    3396 main.go:141] libmachine: () Calling .GetMachineName
	I0718 20:41:05.207196    3396 main.go:141] libmachine: (functional-345000) Calling .DriverName
	I0718 20:41:05.207383    3396 driver.go:392] Setting default libvirt URI to qemu:///system
	I0718 20:41:05.207622    3396 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 20:41:05.207644    3396 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 20:41:05.216298    3396 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50788
	I0718 20:41:05.216676    3396 main.go:141] libmachine: () Calling .GetVersion
	I0718 20:41:05.217014    3396 main.go:141] libmachine: Using API Version  1
	I0718 20:41:05.217029    3396 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 20:41:05.217242    3396 main.go:141] libmachine: () Calling .GetMachineName
	I0718 20:41:05.217364    3396 main.go:141] libmachine: (functional-345000) Calling .DriverName
	I0718 20:41:05.248843    3396 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0718 20:41:05.269899    3396 start.go:297] selected driver: hyperkit
	I0718 20:41:05.269915    3396 start.go:901] validating driver "hyperkit" against &{Name:functional-345000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:functional-345000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0718 20:41:05.270022    3396 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0718 20:41:05.293866    3396 out.go:177] 
	W0718 20:41:05.315032    3396 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0718 20:41:05.335874    3396 out.go:177] 

** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.59s)
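The French output above is the point of this test: minikube localizes its messages according to the standard locale environment variables, and the test invokes the binary under a French locale. A minimal runnable sketch of injecting a locale into a single child process (the minikube invocation itself is not reproduced here):

```shell
# Locale variables are ordinary environment variables; a prefix assignment
# sets them for one child process only. The real test does the same with
# out/minikube-darwin-amd64.
LC_ALL=fr_FR.UTF-8 sh -c 'echo "child locale: $LC_ALL"'
# prints: child locale: fr_FR.UTF-8
```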

TestFunctional/parallel/StatusCmd (0.52s)

=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.52s)

TestFunctional/parallel/ServiceCmdConnect (8.63s)

=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-345000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-345000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-jjsh6" [7da753ad-faed-4b8d-ba27-2d1888050074] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-jjsh6" [7da753ad-faed-4b8d-ba27-2d1888050074] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.00548012s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.169.0.4:31563
functional_test.go:1671: http://192.169.0.4:31563: success! body:

Hostname: hello-node-connect-57b4589c47-jjsh6

Pod Information:
	-no pod information available-

Server values:
	server_version=nginx: 1.13.3 - lua: 10008

Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.4:8080/

Request Headers:
	accept-encoding=gzip
	host=192.169.0.4:31563
	user-agent=Go-http-client/1.1

Request Body:
	-no body in request-

--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.63s)

TestFunctional/parallel/AddonsCmd (0.22s)

=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

TestFunctional/parallel/PersistentVolumeClaim (26.2s)

=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [6a71c976-d46f-4980-af91-50de5eaf1e33] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.003969729s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-345000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-345000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-345000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-345000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [2b110f94-90b5-43a2-a304-f2892f39ba1e] Pending
helpers_test.go:344: "sp-pod" [2b110f94-90b5-43a2-a304-f2892f39ba1e] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [2b110f94-90b5-43a2-a304-f2892f39ba1e] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 12.004169173s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-345000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-345000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-345000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [569d1780-4fbc-4f42-bd8f-801146d64f94] Pending
helpers_test.go:344: "sp-pod" [569d1780-4fbc-4f42-bd8f-801146d64f94] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [569d1780-4fbc-4f42-bd8f-801146d64f94] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 7.005465017s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-345000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (26.20s)
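The persistence check above reduces to: write a marker file through the first pod, delete and recreate the pod against the same claim, then confirm the marker survived. A local stand-in using a temp directory in place of the PersistentVolume (all paths here are illustrative, not the test's):

```shell
pv=$(mktemp -d)    # stands in for the provisioned volume backing the claim
touch "$pv/foo"    # mirrors: kubectl exec sp-pod -- touch /tmp/mount/foo
# ...first sp-pod deleted, second sp-pod scheduled against the same claim...
ls "$pv"           # mirrors: kubectl exec sp-pod -- ls /tmp/mount
# prints: foo
```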

TestFunctional/parallel/SSHCmd (0.29s)

=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.29s)

TestFunctional/parallel/CpCmd (1.07s)

=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh -n functional-345000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 cp functional-345000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelCpCmd2177546748/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh -n functional-345000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh -n functional-345000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.07s)
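The last pair of commands exercises copying into a target directory that does not yet exist: the copy to /tmp/does/not/exist/cp-test.txt succeeds, so `minikube cp` must be creating the missing parents before writing the file. The same two-step done by hand (temp paths are illustrative):

```shell
src=$(mktemp)
echo "cp-test content" > "$src"
dst="$(mktemp -d)/does/not/exist/cp-test.txt"
mkdir -p "$(dirname "$dst")"   # minikube cp performs this step implicitly
cp "$src" "$dst"
cat "$dst"
# prints: cp-test content
```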

TestFunctional/parallel/MySQL (27.31s)

=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-345000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-mdjzh" [8f7c41a9-824b-4d79-b3c0-e66ea86ff225] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-mdjzh" [8f7c41a9-824b-4d79-b3c0-e66ea86ff225] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.003227099s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-345000 exec mysql-64454c8b5c-mdjzh -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-345000 exec mysql-64454c8b5c-mdjzh -- mysql -ppassword -e "show databases;": exit status 1 (181.315895ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-345000 exec mysql-64454c8b5c-mdjzh -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-345000 exec mysql-64454c8b5c-mdjzh -- mysql -ppassword -e "show databases;": exit status 1 (123.406037ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-345000 exec mysql-64454c8b5c-mdjzh -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-345000 exec mysql-64454c8b5c-mdjzh -- mysql -ppassword -e "show databases;": exit status 1 (136.892494ms)

** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-345000 exec mysql-64454c8b5c-mdjzh -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (27.31s)
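The three non-zero exits above are expected noise: the pod reports Running before mysqld inside it is ready to accept the test's credentials (first an access-denied, then two socket errors), so the test simply re-runs the query until it succeeds. The retry shape, as a self-contained sketch in which `flaky` is a stand-in for the kubectl exec:

```shell
retry() {                     # re-run "$@" until it succeeds or the budget runs out
  budget=$1; shift
  n=1
  until "$@"; do
    [ "$n" -ge "$budget" ] && return 1
    n=$((n + 1))
    # the real test sleeps between attempts; omitted here
  done
}

calls=0
flaky() { calls=$((calls + 1)); [ "$calls" -ge 3 ]; }   # fails twice, then passes
retry 5 flaky && echo "succeeded on attempt $calls"
# prints: succeeded on attempt 3
```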

TestFunctional/parallel/FileSync (0.22s)

=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1948/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh "sudo cat /etc/test/nested/copy/1948/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)

TestFunctional/parallel/CertSync (1.07s)

=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1948.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh "sudo cat /etc/ssl/certs/1948.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1948.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh "sudo cat /usr/share/ca-certificates/1948.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/19482.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh "sudo cat /etc/ssl/certs/19482.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/19482.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh "sudo cat /usr/share/ca-certificates/19482.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.07s)

TestFunctional/parallel/NodeLabels (0.05s)

=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-345000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

TestFunctional/parallel/NonActiveRuntimeDisabled (0.15s)

=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-345000 ssh "sudo systemctl is-active crio": exit status 1 (147.665431ms)

-- stdout --
	inactive

-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.15s)
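The `ssh: Process exited with status 3` line is `systemctl is-active` signalling a non-active unit through its exit code (it prints `inactive` on stdout and exits non-zero), which is exactly what the test wants for a runtime that should be disabled. The same exit-code convention, mimicked without systemd (`is_active` is a hypothetical stand-in):

```shell
is_active() { echo inactive; return 3; }   # stand-in for: systemctl is-active crio
if is_active crio; then
  echo "unexpected: crio is running"
else
  echo "crio not active (exit status $?)"  # $? here is the condition's exit code
fi
# prints: inactive
# prints: crio not active (exit status 3)
```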

TestFunctional/parallel/License (0.55s)

=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.55s)

TestFunctional/parallel/Version/short (0.1s)

=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

TestFunctional/parallel/Version/components (0.4s)

=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.40s)

TestFunctional/parallel/ImageCommands/ImageListShort (0.18s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-345000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-345000
docker.io/kubernetesui/metrics-scraper:<none>
docker.io/kubernetesui/dashboard:<none>
docker.io/kicbase/echo-server:functional-345000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-345000 image ls --format short --alsologtostderr:
I0718 20:41:16.817560    3486 out.go:291] Setting OutFile to fd 1 ...
I0718 20:41:16.817841    3486 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:41:16.817846    3486 out.go:304] Setting ErrFile to fd 2...
I0718 20:41:16.817850    3486 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:41:16.818031    3486 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1411/.minikube/bin
I0718 20:41:16.818613    3486 config.go:182] Loaded profile config "functional-345000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:41:16.818705    3486 config.go:182] Loaded profile config "functional-345000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:41:16.819056    3486 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0718 20:41:16.819100    3486 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0718 20:41:16.827254    3486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50916
I0718 20:41:16.827693    3486 main.go:141] libmachine: () Calling .GetVersion
I0718 20:41:16.828111    3486 main.go:141] libmachine: Using API Version  1
I0718 20:41:16.828142    3486 main.go:141] libmachine: () Calling .SetConfigRaw
I0718 20:41:16.828378    3486 main.go:141] libmachine: () Calling .GetMachineName
I0718 20:41:16.828503    3486 main.go:141] libmachine: (functional-345000) Calling .GetState
I0718 20:41:16.828595    3486 main.go:141] libmachine: (functional-345000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0718 20:41:16.828659    3486 main.go:141] libmachine: (functional-345000) DBG | hyperkit pid from json: 2790
I0718 20:41:16.829885    3486 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0718 20:41:16.829914    3486 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0718 20:41:16.838199    3486 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50918
I0718 20:41:16.838530    3486 main.go:141] libmachine: () Calling .GetVersion
I0718 20:41:16.838914    3486 main.go:141] libmachine: Using API Version  1
I0718 20:41:16.838944    3486 main.go:141] libmachine: () Calling .SetConfigRaw
I0718 20:41:16.839140    3486 main.go:141] libmachine: () Calling .GetMachineName
I0718 20:41:16.839248    3486 main.go:141] libmachine: (functional-345000) Calling .DriverName
I0718 20:41:16.839429    3486 ssh_runner.go:195] Run: systemctl --version
I0718 20:41:16.839448    3486 main.go:141] libmachine: (functional-345000) Calling .GetSSHHostname
I0718 20:41:16.839532    3486 main.go:141] libmachine: (functional-345000) Calling .GetSSHPort
I0718 20:41:16.839608    3486 main.go:141] libmachine: (functional-345000) Calling .GetSSHKeyPath
I0718 20:41:16.839720    3486 main.go:141] libmachine: (functional-345000) Calling .GetSSHUsername
I0718 20:41:16.839799    3486 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/functional-345000/id_rsa Username:docker}
I0718 20:41:16.880179    3486 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0718 20:41:16.904277    3486 main.go:141] libmachine: Making call to close driver server
I0718 20:41:16.904286    3486 main.go:141] libmachine: (functional-345000) Calling .Close
I0718 20:41:16.904452    3486 main.go:141] libmachine: Successfully made call to close driver server
I0718 20:41:16.904463    3486 main.go:141] libmachine: Making call to close connection to plugin binary
I0718 20:41:16.904471    3486 main.go:141] libmachine: Making call to close driver server
I0718 20:41:16.904475    3486 main.go:141] libmachine: (functional-345000) Calling .Close
I0718 20:41:16.904481    3486 main.go:141] libmachine: (functional-345000) DBG | Closing plugin on server side
I0718 20:41:16.904611    3486 main.go:141] libmachine: (functional-345000) DBG | Closing plugin on server side
I0718 20:41:16.904620    3486 main.go:141] libmachine: Successfully made call to close driver server
I0718 20:41:16.904632    3486 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.18s)

TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-345000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/localhost/my-image                | functional-345000 | d23917f810952 | 1.24MB |
| docker.io/library/minikube-local-cache-test | functional-345000 | 87d8d710bd624 | 30B    |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| docker.io/kubernetesui/dashboard            | <none>            | 07655ddf2eebe | 246MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| registry.k8s.io/kube-scheduler              | v1.30.3           | 3edc18e7b7672 | 62MB   |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 55bb025d2cfa5 | 84.7MB |
| docker.io/library/nginx                     | latest            | fffffc90d343c | 188MB  |
| docker.io/library/nginx                     | alpine            | 099a2d701db1f | 43.2MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 1f6d574d502f3 | 117MB  |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 76932a3b37d7e | 111MB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| docker.io/kicbase/echo-server               | functional-345000 | 9056ab77afb8e | 4.94MB |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-345000 image ls --format table --alsologtostderr:
I0718 20:41:19.165605    3511 out.go:291] Setting OutFile to fd 1 ...
I0718 20:41:19.165798    3511 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:41:19.165804    3511 out.go:304] Setting ErrFile to fd 2...
I0718 20:41:19.165807    3511 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:41:19.166571    3511 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1411/.minikube/bin
I0718 20:41:19.167460    3511 config.go:182] Loaded profile config "functional-345000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:41:19.167556    3511 config.go:182] Loaded profile config "functional-345000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:41:19.167913    3511 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0718 20:41:19.167964    3511 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0718 20:41:19.176197    3511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50948
I0718 20:41:19.176622    3511 main.go:141] libmachine: () Calling .GetVersion
I0718 20:41:19.177041    3511 main.go:141] libmachine: Using API Version  1
I0718 20:41:19.177055    3511 main.go:141] libmachine: () Calling .SetConfigRaw
I0718 20:41:19.177253    3511 main.go:141] libmachine: () Calling .GetMachineName
I0718 20:41:19.177369    3511 main.go:141] libmachine: (functional-345000) Calling .GetState
I0718 20:41:19.177466    3511 main.go:141] libmachine: (functional-345000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0718 20:41:19.177524    3511 main.go:141] libmachine: (functional-345000) DBG | hyperkit pid from json: 2790
I0718 20:41:19.178766    3511 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0718 20:41:19.178786    3511 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0718 20:41:19.187250    3511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50950
I0718 20:41:19.187610    3511 main.go:141] libmachine: () Calling .GetVersion
I0718 20:41:19.187920    3511 main.go:141] libmachine: Using API Version  1
I0718 20:41:19.187937    3511 main.go:141] libmachine: () Calling .SetConfigRaw
I0718 20:41:19.188140    3511 main.go:141] libmachine: () Calling .GetMachineName
I0718 20:41:19.188249    3511 main.go:141] libmachine: (functional-345000) Calling .DriverName
I0718 20:41:19.188408    3511 ssh_runner.go:195] Run: systemctl --version
I0718 20:41:19.188426    3511 main.go:141] libmachine: (functional-345000) Calling .GetSSHHostname
I0718 20:41:19.188500    3511 main.go:141] libmachine: (functional-345000) Calling .GetSSHPort
I0718 20:41:19.188584    3511 main.go:141] libmachine: (functional-345000) Calling .GetSSHKeyPath
I0718 20:41:19.188680    3511 main.go:141] libmachine: (functional-345000) Calling .GetSSHUsername
I0718 20:41:19.188760    3511 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/functional-345000/id_rsa Username:docker}
I0718 20:41:19.225065    3511 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0718 20:41:19.242692    3511 main.go:141] libmachine: Making call to close driver server
I0718 20:41:19.242700    3511 main.go:141] libmachine: (functional-345000) Calling .Close
I0718 20:41:19.242851    3511 main.go:141] libmachine: Successfully made call to close driver server
I0718 20:41:19.242862    3511 main.go:141] libmachine: Making call to close connection to plugin binary
I0718 20:41:19.242869    3511 main.go:141] libmachine: Making call to close driver server
I0718 20:41:19.242874    3511 main.go:141] libmachine: (functional-345000) Calling .Close
I0718 20:41:19.243036    3511 main.go:141] libmachine: (functional-345000) DBG | Closing plugin on server side
I0718 20:41:19.243076    3511 main.go:141] libmachine: Successfully made call to close driver server
I0718 20:41:19.243092    3511 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)
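The Size column in the table output above is the humanized form of the raw byte counts that appear in the json/yaml listings below (e.g. `59800000` → `59.8MB`). A minimal sketch of that Docker-style decimal formatting, assuming power-of-1000 units and roughly three significant digits; the helper name `humanize_size` is hypothetical, not minikube's:

```python
def humanize_size(size_bytes: float) -> str:
    """Render a raw byte count the way the Size column above shows it:
    decimal (power-of-1000) units, about three significant digits.
    Rounding edge cases near unit boundaries (e.g. 999500) are ignored."""
    units = ["B", "kB", "MB", "GB", "TB"]
    i = 0
    while size_bytes >= 1000 and i < len(units) - 1:
        size_bytes /= 1000.0
        i += 1
    return f"{size_bytes:.3g}{units[i]}"


# Values taken from the listings in this report: coredns, pause:3.1,
# and the 30-byte minikube-local-cache-test image
print(humanize_size(59800000))  # 59.8MB
print(humanize_size(742000))    # 742kB
print(humanize_size(30))        # 30B
```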

TestFunctional/parallel/ImageCommands/ImageListJson (0.15s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-345000 image ls --format json --alsologtostderr:
[{"id":"099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"87d8d710bd62492920ee7ff454605c22c6db668b41a0830f91ea31979e3f7977","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-345000"],"size":"30"},{"id":"76932a3b37d7eb138c8f47c9a2b42
18f0466dd273badf856f2ce2f0277e15b5e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"111000000"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"84700000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-345000"],"size":"4940000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","re
poDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"d23917f8109522fbe25aae51ba1eb8827bafd5181eba0fe8ec5a19f37cfa5efc","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-345000"],"size":"1240000"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"62000000"},{"id":"fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"74400
0"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117000000"},{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":[],"repoTags":["docker.io/kubernetesui/dashboard:\u003cnone\u003e"],"size":"246000000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-345000 image ls --format json --alsologtostderr:
I0718 20:41:19.012272    3507 out.go:291] Setting OutFile to fd 1 ...
I0718 20:41:19.012499    3507 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:41:19.012505    3507 out.go:304] Setting ErrFile to fd 2...
I0718 20:41:19.012509    3507 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:41:19.012675    3507 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1411/.minikube/bin
I0718 20:41:19.013260    3507 config.go:182] Loaded profile config "functional-345000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:41:19.013356    3507 config.go:182] Loaded profile config "functional-345000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:41:19.013705    3507 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0718 20:41:19.013746    3507 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0718 20:41:19.021906    3507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50943
I0718 20:41:19.022275    3507 main.go:141] libmachine: () Calling .GetVersion
I0718 20:41:19.022776    3507 main.go:141] libmachine: Using API Version  1
I0718 20:41:19.022799    3507 main.go:141] libmachine: () Calling .SetConfigRaw
I0718 20:41:19.022999    3507 main.go:141] libmachine: () Calling .GetMachineName
I0718 20:41:19.023115    3507 main.go:141] libmachine: (functional-345000) Calling .GetState
I0718 20:41:19.023225    3507 main.go:141] libmachine: (functional-345000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0718 20:41:19.023272    3507 main.go:141] libmachine: (functional-345000) DBG | hyperkit pid from json: 2790
I0718 20:41:19.024499    3507 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0718 20:41:19.024536    3507 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0718 20:41:19.032913    3507 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50945
I0718 20:41:19.033253    3507 main.go:141] libmachine: () Calling .GetVersion
I0718 20:41:19.033647    3507 main.go:141] libmachine: Using API Version  1
I0718 20:41:19.033669    3507 main.go:141] libmachine: () Calling .SetConfigRaw
I0718 20:41:19.033877    3507 main.go:141] libmachine: () Calling .GetMachineName
I0718 20:41:19.033987    3507 main.go:141] libmachine: (functional-345000) Calling .DriverName
I0718 20:41:19.034167    3507 ssh_runner.go:195] Run: systemctl --version
I0718 20:41:19.034189    3507 main.go:141] libmachine: (functional-345000) Calling .GetSSHHostname
I0718 20:41:19.034282    3507 main.go:141] libmachine: (functional-345000) Calling .GetSSHPort
I0718 20:41:19.034364    3507 main.go:141] libmachine: (functional-345000) Calling .GetSSHKeyPath
I0718 20:41:19.034439    3507 main.go:141] libmachine: (functional-345000) Calling .GetSSHUsername
I0718 20:41:19.034528    3507 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/functional-345000/id_rsa Username:docker}
I0718 20:41:19.069863    3507 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0718 20:41:19.086764    3507 main.go:141] libmachine: Making call to close driver server
I0718 20:41:19.086775    3507 main.go:141] libmachine: (functional-345000) Calling .Close
I0718 20:41:19.086921    3507 main.go:141] libmachine: Successfully made call to close driver server
I0718 20:41:19.086932    3507 main.go:141] libmachine: Making call to close connection to plugin binary
I0718 20:41:19.086939    3507 main.go:141] libmachine: Making call to close driver server
I0718 20:41:19.086939    3507 main.go:141] libmachine: (functional-345000) DBG | Closing plugin on server side
I0718 20:41:19.086946    3507 main.go:141] libmachine: (functional-345000) Calling .Close
I0718 20:41:19.087086    3507 main.go:141] libmachine: Successfully made call to close driver server
I0718 20:41:19.087093    3507 main.go:141] libmachine: (functional-345000) DBG | Closing plugin on server side
I0718 20:41:19.087095    3507 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.15s)
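The `--format json` stdout above is a single array of objects with `id`, `repoDigests`, `repoTags`, and `size` fields. A minimal sketch of flattening such a payload into (repository, tag, size) rows like the table view, assuming that shape; the function name `image_rows` is hypothetical:

```python
import json


def image_rows(payload: str) -> list:
    """Flatten an `image ls --format json`-style payload into
    (repository, tag, size) tuples, one per repo tag."""
    rows = []
    for image in json.loads(payload):
        for repo_tag in image.get("repoTags", []):
            # Split on the last ':' so registry ports in the repo part survive
            repo, _, tag = repo_tag.rpartition(":")
            rows.append((repo, tag, image["size"]))
    return rows


# A two-image excerpt shaped like the JSON output above
sample = (
    '[{"id":"da86e6ba6ca19","repoDigests":[],'
    '"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},'
    '{"id":"cbb01a7bd410d","repoDigests":[],'
    '"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"}]'
)
for row in image_rows(sample):
    print(*row)
```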

TestFunctional/parallel/ImageCommands/ImageListYaml (0.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-345000 image ls --format yaml --alsologtostderr:
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "84700000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests: []
repoTags:
- docker.io/kubernetesui/dashboard:<none>
size: "246000000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests: []
repoTags:
- docker.io/kubernetesui/metrics-scraper:<none>
size: "43800000"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "111000000"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "62000000"
- id: fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: 87d8d710bd62492920ee7ff454605c22c6db668b41a0830f91ea31979e3f7977
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-345000
size: "30"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-345000
size: "4940000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"

functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-345000 image ls --format yaml --alsologtostderr:
I0718 20:41:17.000288    3490 out.go:291] Setting OutFile to fd 1 ...
I0718 20:41:17.000486    3490 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:41:17.000492    3490 out.go:304] Setting ErrFile to fd 2...
I0718 20:41:17.000498    3490 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:41:17.000671    3490 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1411/.minikube/bin
I0718 20:41:17.001390    3490 config.go:182] Loaded profile config "functional-345000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:41:17.001493    3490 config.go:182] Loaded profile config "functional-345000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:41:17.001869    3490 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0718 20:41:17.001903    3490 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0718 20:41:17.010166    3490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50921
I0718 20:41:17.010563    3490 main.go:141] libmachine: () Calling .GetVersion
I0718 20:41:17.010972    3490 main.go:141] libmachine: Using API Version  1
I0718 20:41:17.010984    3490 main.go:141] libmachine: () Calling .SetConfigRaw
I0718 20:41:17.011184    3490 main.go:141] libmachine: () Calling .GetMachineName
I0718 20:41:17.011284    3490 main.go:141] libmachine: (functional-345000) Calling .GetState
I0718 20:41:17.011375    3490 main.go:141] libmachine: (functional-345000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0718 20:41:17.011441    3490 main.go:141] libmachine: (functional-345000) DBG | hyperkit pid from json: 2790
I0718 20:41:17.012632    3490 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0718 20:41:17.012651    3490 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0718 20:41:17.021047    3490 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50923
I0718 20:41:17.021365    3490 main.go:141] libmachine: () Calling .GetVersion
I0718 20:41:17.021681    3490 main.go:141] libmachine: Using API Version  1
I0718 20:41:17.021690    3490 main.go:141] libmachine: () Calling .SetConfigRaw
I0718 20:41:17.021931    3490 main.go:141] libmachine: () Calling .GetMachineName
I0718 20:41:17.022055    3490 main.go:141] libmachine: (functional-345000) Calling .DriverName
I0718 20:41:17.022213    3490 ssh_runner.go:195] Run: systemctl --version
I0718 20:41:17.022231    3490 main.go:141] libmachine: (functional-345000) Calling .GetSSHHostname
I0718 20:41:17.022312    3490 main.go:141] libmachine: (functional-345000) Calling .GetSSHPort
I0718 20:41:17.022400    3490 main.go:141] libmachine: (functional-345000) Calling .GetSSHKeyPath
I0718 20:41:17.022480    3490 main.go:141] libmachine: (functional-345000) Calling .GetSSHUsername
I0718 20:41:17.022570    3490 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/functional-345000/id_rsa Username:docker}
I0718 20:41:17.058620    3490 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0718 20:41:17.075985    3490 main.go:141] libmachine: Making call to close driver server
I0718 20:41:17.075993    3490 main.go:141] libmachine: (functional-345000) Calling .Close
I0718 20:41:17.076138    3490 main.go:141] libmachine: Successfully made call to close driver server
I0718 20:41:17.076146    3490 main.go:141] libmachine: Making call to close connection to plugin binary
I0718 20:41:17.076153    3490 main.go:141] libmachine: Making call to close driver server
I0718 20:41:17.076159    3490 main.go:141] libmachine: (functional-345000) Calling .Close
I0718 20:41:17.076317    3490 main.go:141] libmachine: (functional-345000) DBG | Closing plugin on server side
I0718 20:41:17.076332    3490 main.go:141] libmachine: Successfully made call to close driver server
I0718 20:41:17.076356    3490 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.16s)

TestFunctional/parallel/ImageCommands/ImageBuild (1.86s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-345000 ssh pgrep buildkitd: exit status 1 (128.912624ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 image build -t localhost/my-image:functional-345000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-345000 image build -t localhost/my-image:functional-345000 testdata/build --alsologtostderr: (1.574821612s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-345000 image build -t localhost/my-image:functional-345000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in c4fa4523dbdb
---> Removed intermediate container c4fa4523dbdb
---> 25f3b7b14441
Step 3/3 : ADD content.txt /
---> d23917f81095
Successfully built d23917f81095
Successfully tagged localhost/my-image:functional-345000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-345000 image build -t localhost/my-image:functional-345000 testdata/build --alsologtostderr:
I0718 20:41:17.284332    3499 out.go:291] Setting OutFile to fd 1 ...
I0718 20:41:17.284683    3499 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:41:17.284689    3499 out.go:304] Setting ErrFile to fd 2...
I0718 20:41:17.284693    3499 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0718 20:41:17.284890    3499 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1411/.minikube/bin
I0718 20:41:17.285513    3499 config.go:182] Loaded profile config "functional-345000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:41:17.286161    3499 config.go:182] Loaded profile config "functional-345000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0718 20:41:17.286546    3499 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0718 20:41:17.286586    3499 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0718 20:41:17.294751    3499 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50933
I0718 20:41:17.295140    3499 main.go:141] libmachine: () Calling .GetVersion
I0718 20:41:17.295571    3499 main.go:141] libmachine: Using API Version  1
I0718 20:41:17.295583    3499 main.go:141] libmachine: () Calling .SetConfigRaw
I0718 20:41:17.295836    3499 main.go:141] libmachine: () Calling .GetMachineName
I0718 20:41:17.295983    3499 main.go:141] libmachine: (functional-345000) Calling .GetState
I0718 20:41:17.296079    3499 main.go:141] libmachine: (functional-345000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0718 20:41:17.296135    3499 main.go:141] libmachine: (functional-345000) DBG | hyperkit pid from json: 2790
I0718 20:41:17.297347    3499 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0718 20:41:17.297369    3499 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0718 20:41:17.305671    3499 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50935
I0718 20:41:17.306023    3499 main.go:141] libmachine: () Calling .GetVersion
I0718 20:41:17.306346    3499 main.go:141] libmachine: Using API Version  1
I0718 20:41:17.306355    3499 main.go:141] libmachine: () Calling .SetConfigRaw
I0718 20:41:17.306587    3499 main.go:141] libmachine: () Calling .GetMachineName
I0718 20:41:17.306717    3499 main.go:141] libmachine: (functional-345000) Calling .DriverName
I0718 20:41:17.306899    3499 ssh_runner.go:195] Run: systemctl --version
I0718 20:41:17.306919    3499 main.go:141] libmachine: (functional-345000) Calling .GetSSHHostname
I0718 20:41:17.307008    3499 main.go:141] libmachine: (functional-345000) Calling .GetSSHPort
I0718 20:41:17.307096    3499 main.go:141] libmachine: (functional-345000) Calling .GetSSHKeyPath
I0718 20:41:17.307178    3499 main.go:141] libmachine: (functional-345000) Calling .GetSSHUsername
I0718 20:41:17.307264    3499 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/functional-345000/id_rsa Username:docker}
I0718 20:41:17.342519    3499 build_images.go:161] Building image from path: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.1955844722.tar
I0718 20:41:17.342577    3499 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0718 20:41:17.351542    3499 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1955844722.tar
I0718 20:41:17.354791    3499 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1955844722.tar: stat -c "%s %y" /var/lib/minikube/build/build.1955844722.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.1955844722.tar': No such file or directory
I0718 20:41:17.354818    3499 ssh_runner.go:362] scp /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.1955844722.tar --> /var/lib/minikube/build/build.1955844722.tar (3072 bytes)
I0718 20:41:17.375059    3499 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1955844722
I0718 20:41:17.385158    3499 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1955844722 -xf /var/lib/minikube/build/build.1955844722.tar
I0718 20:41:17.393299    3499 docker.go:360] Building image: /var/lib/minikube/build/build.1955844722
I0718 20:41:17.393368    3499 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-345000 /var/lib/minikube/build/build.1955844722
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/
I0718 20:41:18.762201    3499 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-345000 /var/lib/minikube/build/build.1955844722: (1.3687922s)
I0718 20:41:18.762261    3499 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1955844722
I0718 20:41:18.771212    3499 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1955844722.tar
I0718 20:41:18.779247    3499 build_images.go:217] Built localhost/my-image:functional-345000 from /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/build.1955844722.tar
I0718 20:41:18.779269    3499 build_images.go:133] succeeded building to: functional-345000
I0718 20:41:18.779273    3499 build_images.go:134] failed building to: 
I0718 20:41:18.779288    3499 main.go:141] libmachine: Making call to close driver server
I0718 20:41:18.779295    3499 main.go:141] libmachine: (functional-345000) Calling .Close
I0718 20:41:18.779439    3499 main.go:141] libmachine: Successfully made call to close driver server
I0718 20:41:18.779443    3499 main.go:141] libmachine: (functional-345000) DBG | Closing plugin on server side
I0718 20:41:18.779447    3499 main.go:141] libmachine: Making call to close connection to plugin binary
I0718 20:41:18.779453    3499 main.go:141] libmachine: Making call to close driver server
I0718 20:41:18.779457    3499 main.go:141] libmachine: (functional-345000) Calling .Close
I0718 20:41:18.779592    3499 main.go:141] libmachine: Successfully made call to close driver server
I0718 20:41:18.779599    3499 main.go:141] libmachine: Making call to close connection to plugin binary
I0718 20:41:18.779607    3499 main.go:141] libmachine: (functional-345000) DBG | Closing plugin on server side
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.86s)

TestFunctional/parallel/ImageCommands/Setup (1.93s)
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.894328957s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-345000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.93s)

TestFunctional/parallel/DockerEnv/bash (0.61s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-345000 docker-env) && out/minikube-darwin-amd64 status -p functional-345000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-345000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.61s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.23s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.19s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.91s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 image load --daemon docker.io/kicbase/echo-server:functional-345000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (0.91s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.63s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 image load --daemon docker.io/kicbase/echo-server:functional-345000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.63s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.42s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-345000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 image load --daemon docker.io/kicbase/echo-server:functional-345000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.42s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 image save docker.io/kicbase/echo-server:functional-345000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.37s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.36s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 image rm docker.io/kicbase/echo-server:functional-345000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.36s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.82s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-345000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 image save --daemon docker.io/kicbase/echo-server:functional-345000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-345000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.39s)

TestFunctional/parallel/ServiceCmd/DeployApp (21.13s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-345000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-345000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-tf5h4" [d6d9ec22-9c2d-44ef-9fdb-e4250d0f603e] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-tf5h4" [d6d9ec22-9c2d-44ef-9fdb-e4250d0f603e] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 21.003953247s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (21.13s)

TestFunctional/parallel/ServiceCmd/List (0.19s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.19s)

TestFunctional/parallel/ServiceCmd/JSONOutput (0.18s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 service list -o json
functional_test.go:1490: Took "182.872058ms" to run "out/minikube-darwin-amd64 -p functional-345000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.18s)

TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.169.0.4:31234
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)

TestFunctional/parallel/ServiceCmd/Format (0.25s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.25s)

TestFunctional/parallel/ServiceCmd/URL (0.26s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.169.0.4:31234
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.26s)

TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.37s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-345000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-345000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-345000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 3246: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-345000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.37s)

TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-345000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.14s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-345000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [582cb39d-d64f-47cd-b4c2-cc9a24163a5f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [582cb39d-d64f-47cd-b4c2-cc9a24163a5f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.003257397s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.14s)

TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-345000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.105.142.249 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-345000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.26s)

TestFunctional/parallel/ProfileCmd/profile_list (0.28s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "178.522889ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "98.790343ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.28s)

TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "180.510853ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "77.197658ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

TestFunctional/parallel/MountCmd/any-port (6.4s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-345000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2482320943/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721360459573400000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2482320943/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721360459573400000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2482320943/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721360459573400000" to /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2482320943/001/test-1721360459573400000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-345000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (152.517045ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 19 03:40 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 19 03:40 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 19 03:40 test-1721360459573400000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh cat /mount-9p/test-1721360459573400000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-345000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [824cdf88-c6bd-49f2-84de-800c41e01151] Pending
helpers_test.go:344: "busybox-mount" [824cdf88-c6bd-49f2-84de-800c41e01151] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [824cdf88-c6bd-49f2-84de-800c41e01151] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [824cdf88-c6bd-49f2-84de-800c41e01151] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004813506s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-345000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-345000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdany-port2482320943/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.40s)

TestFunctional/parallel/MountCmd/VerifyCleanup (1.77s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-345000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3487339515/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-345000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3487339515/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-345000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3487339515/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-345000 ssh "findmnt -T" /mount1: exit status 1 (173.27219ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-345000 ssh "findmnt -T" /mount1: exit status 1 (176.647075ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-345000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-345000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3487339515/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-345000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3487339515/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-345000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdVerifyCleanup3487339515/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.77s)

TestFunctional/delete_echo-server_images (0.04s)

=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-345000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

TestFunctional/delete_my-image_image (0.02s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-345000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-345000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMultiControlPlane/serial/StartCluster (201.59s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-440000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
E0718 20:41:41.467342    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-440000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : (3m21.201433929s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (201.59s)

TestMultiControlPlane/serial/DeployApp (8.74s)

=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-440000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-440000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-440000 -- rollout status deployment/busybox: (6.4911421s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-440000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-440000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-440000 -- exec busybox-fc5497c4f-2gxxp -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-440000 -- exec busybox-fc5497c4f-9495f -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-440000 -- exec busybox-fc5497c4f-c9f5b -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-440000 -- exec busybox-fc5497c4f-2gxxp -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-440000 -- exec busybox-fc5497c4f-9495f -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-440000 -- exec busybox-fc5497c4f-c9f5b -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-440000 -- exec busybox-fc5497c4f-2gxxp -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-440000 -- exec busybox-fc5497c4f-9495f -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-440000 -- exec busybox-fc5497c4f-c9f5b -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (8.74s)

TestMultiControlPlane/serial/PingHostFromPods (1.27s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-440000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-440000 -- exec busybox-fc5497c4f-2gxxp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-440000 -- exec busybox-fc5497c4f-2gxxp -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-440000 -- exec busybox-fc5497c4f-9495f -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-440000 -- exec busybox-fc5497c4f-9495f -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-440000 -- exec busybox-fc5497c4f-c9f5b -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-440000 -- exec busybox-fc5497c4f-c9f5b -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.27s)

TestMultiControlPlane/serial/AddWorkerNode (52.14s)

=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-440000 -v=7 --alsologtostderr
E0718 20:45:10.472785    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
E0718 20:45:10.478025    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
E0718 20:45:10.489590    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
E0718 20:45:10.511090    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
E0718 20:45:10.552503    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
E0718 20:45:10.633285    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
E0718 20:45:10.794360    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
E0718 20:45:11.116002    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
E0718 20:45:11.757420    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
E0718 20:45:13.039030    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
E0718 20:45:15.600255    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
E0718 20:45:20.720916    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
E0718 20:45:30.962634    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-440000 -v=7 --alsologtostderr: (51.692653392s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 status -v=7 --alsologtostderr
E0718 20:45:51.443616    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (52.14s)

TestMultiControlPlane/serial/NodeLabels (0.05s)

=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-440000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.05s)

TestMultiControlPlane/serial/HAppyAfterClusterStart (0.34s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.34s)

TestMultiControlPlane/serial/CopyFile (9.28s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 cp testdata/cp-test.txt ha-440000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 cp ha-440000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile248519523/001/cp-test_ha-440000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 cp ha-440000:/home/docker/cp-test.txt ha-440000-m02:/home/docker/cp-test_ha-440000_ha-440000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000-m02 "sudo cat /home/docker/cp-test_ha-440000_ha-440000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 cp ha-440000:/home/docker/cp-test.txt ha-440000-m03:/home/docker/cp-test_ha-440000_ha-440000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000-m03 "sudo cat /home/docker/cp-test_ha-440000_ha-440000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 cp ha-440000:/home/docker/cp-test.txt ha-440000-m04:/home/docker/cp-test_ha-440000_ha-440000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000-m04 "sudo cat /home/docker/cp-test_ha-440000_ha-440000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 cp testdata/cp-test.txt ha-440000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 cp ha-440000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile248519523/001/cp-test_ha-440000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 cp ha-440000-m02:/home/docker/cp-test.txt ha-440000:/home/docker/cp-test_ha-440000-m02_ha-440000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000 "sudo cat /home/docker/cp-test_ha-440000-m02_ha-440000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 cp ha-440000-m02:/home/docker/cp-test.txt ha-440000-m03:/home/docker/cp-test_ha-440000-m02_ha-440000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000-m03 "sudo cat /home/docker/cp-test_ha-440000-m02_ha-440000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 cp ha-440000-m02:/home/docker/cp-test.txt ha-440000-m04:/home/docker/cp-test_ha-440000-m02_ha-440000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000-m04 "sudo cat /home/docker/cp-test_ha-440000-m02_ha-440000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 cp testdata/cp-test.txt ha-440000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 cp ha-440000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile248519523/001/cp-test_ha-440000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 cp ha-440000-m03:/home/docker/cp-test.txt ha-440000:/home/docker/cp-test_ha-440000-m03_ha-440000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000 "sudo cat /home/docker/cp-test_ha-440000-m03_ha-440000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 cp ha-440000-m03:/home/docker/cp-test.txt ha-440000-m02:/home/docker/cp-test_ha-440000-m03_ha-440000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000-m02 "sudo cat /home/docker/cp-test_ha-440000-m03_ha-440000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 cp ha-440000-m03:/home/docker/cp-test.txt ha-440000-m04:/home/docker/cp-test_ha-440000-m03_ha-440000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000-m04 "sudo cat /home/docker/cp-test_ha-440000-m03_ha-440000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 cp testdata/cp-test.txt ha-440000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 cp ha-440000-m04:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiControlPlaneserialCopyFile248519523/001/cp-test_ha-440000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 cp ha-440000-m04:/home/docker/cp-test.txt ha-440000:/home/docker/cp-test_ha-440000-m04_ha-440000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000 "sudo cat /home/docker/cp-test_ha-440000-m04_ha-440000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 cp ha-440000-m04:/home/docker/cp-test.txt ha-440000-m02:/home/docker/cp-test_ha-440000-m04_ha-440000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000-m02 "sudo cat /home/docker/cp-test_ha-440000-m04_ha-440000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 cp ha-440000-m04:/home/docker/cp-test.txt ha-440000-m03:/home/docker/cp-test_ha-440000-m04_ha-440000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 ssh -n ha-440000-m03 "sudo cat /home/docker/cp-test_ha-440000-m04_ha-440000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (9.28s)

TestMultiControlPlane/serial/StopSecondaryNode (8.7s)

=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-440000 node stop m02 -v=7 --alsologtostderr: (8.341468815s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-440000 status -v=7 --alsologtostderr: exit status 7 (358.496321ms)

-- stdout --
	ha-440000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-440000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-440000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-440000-m04
	type: Worker
	host: Running
	kubelet: Running

-- /stdout --
** stderr ** 
	I0718 20:46:09.654280    4053 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:46:09.654562    4053 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:46:09.654567    4053 out.go:304] Setting ErrFile to fd 2...
	I0718 20:46:09.654572    4053 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:46:09.654747    4053 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1411/.minikube/bin
	I0718 20:46:09.654923    4053 out.go:298] Setting JSON to false
	I0718 20:46:09.654945    4053 mustload.go:65] Loading cluster: ha-440000
	I0718 20:46:09.654986    4053 notify.go:220] Checking for updates...
	I0718 20:46:09.655275    4053 config.go:182] Loaded profile config "ha-440000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:46:09.655290    4053 status.go:255] checking status of ha-440000 ...
	I0718 20:46:09.655646    4053 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 20:46:09.655689    4053 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 20:46:09.664432    4053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51745
	I0718 20:46:09.664766    4053 main.go:141] libmachine: () Calling .GetVersion
	I0718 20:46:09.665158    4053 main.go:141] libmachine: Using API Version  1
	I0718 20:46:09.665169    4053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 20:46:09.665376    4053 main.go:141] libmachine: () Calling .GetMachineName
	I0718 20:46:09.665490    4053 main.go:141] libmachine: (ha-440000) Calling .GetState
	I0718 20:46:09.665581    4053 main.go:141] libmachine: (ha-440000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 20:46:09.665664    4053 main.go:141] libmachine: (ha-440000) DBG | hyperkit pid from json: 3586
	I0718 20:46:09.666698    4053 status.go:330] ha-440000 host status = "Running" (err=<nil>)
	I0718 20:46:09.666718    4053 host.go:66] Checking if "ha-440000" exists ...
	I0718 20:46:09.667001    4053 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 20:46:09.667025    4053 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 20:46:09.675375    4053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51747
	I0718 20:46:09.675716    4053 main.go:141] libmachine: () Calling .GetVersion
	I0718 20:46:09.676086    4053 main.go:141] libmachine: Using API Version  1
	I0718 20:46:09.676106    4053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 20:46:09.676325    4053 main.go:141] libmachine: () Calling .GetMachineName
	I0718 20:46:09.676437    4053 main.go:141] libmachine: (ha-440000) Calling .GetIP
	I0718 20:46:09.676517    4053 host.go:66] Checking if "ha-440000" exists ...
	I0718 20:46:09.676780    4053 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 20:46:09.676801    4053 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 20:46:09.685470    4053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51749
	I0718 20:46:09.685813    4053 main.go:141] libmachine: () Calling .GetVersion
	I0718 20:46:09.686225    4053 main.go:141] libmachine: Using API Version  1
	I0718 20:46:09.686241    4053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 20:46:09.686437    4053 main.go:141] libmachine: () Calling .GetMachineName
	I0718 20:46:09.686539    4053 main.go:141] libmachine: (ha-440000) Calling .DriverName
	I0718 20:46:09.686690    4053 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 20:46:09.686710    4053 main.go:141] libmachine: (ha-440000) Calling .GetSSHHostname
	I0718 20:46:09.686787    4053 main.go:141] libmachine: (ha-440000) Calling .GetSSHPort
	I0718 20:46:09.686858    4053 main.go:141] libmachine: (ha-440000) Calling .GetSSHKeyPath
	I0718 20:46:09.686932    4053 main.go:141] libmachine: (ha-440000) Calling .GetSSHUsername
	I0718 20:46:09.687013    4053 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/ha-440000/id_rsa Username:docker}
	I0718 20:46:09.722857    4053 ssh_runner.go:195] Run: systemctl --version
	I0718 20:46:09.727196    4053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 20:46:09.739144    4053 kubeconfig.go:125] found "ha-440000" server: "https://192.169.0.254:8443"
	I0718 20:46:09.739171    4053 api_server.go:166] Checking apiserver status ...
	I0718 20:46:09.739211    4053 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 20:46:09.751113    4053 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2063/cgroup
	W0718 20:46:09.759230    4053 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2063/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0718 20:46:09.759274    4053 ssh_runner.go:195] Run: ls
	I0718 20:46:09.762712    4053 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0718 20:46:09.765895    4053 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0718 20:46:09.765906    4053 status.go:422] ha-440000 apiserver status = Running (err=<nil>)
	I0718 20:46:09.765915    4053 status.go:257] ha-440000 status: &{Name:ha-440000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0718 20:46:09.765926    4053 status.go:255] checking status of ha-440000-m02 ...
	I0718 20:46:09.766173    4053 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 20:46:09.766196    4053 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 20:46:09.774951    4053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51753
	I0718 20:46:09.775313    4053 main.go:141] libmachine: () Calling .GetVersion
	I0718 20:46:09.775626    4053 main.go:141] libmachine: Using API Version  1
	I0718 20:46:09.775637    4053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 20:46:09.775831    4053 main.go:141] libmachine: () Calling .GetMachineName
	I0718 20:46:09.775949    4053 main.go:141] libmachine: (ha-440000-m02) Calling .GetState
	I0718 20:46:09.776040    4053 main.go:141] libmachine: (ha-440000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 20:46:09.776115    4053 main.go:141] libmachine: (ha-440000-m02) DBG | hyperkit pid from json: 3603
	I0718 20:46:09.777137    4053 main.go:141] libmachine: (ha-440000-m02) DBG | hyperkit pid 3603 missing from process table
	I0718 20:46:09.777169    4053 status.go:330] ha-440000-m02 host status = "Stopped" (err=<nil>)
	I0718 20:46:09.777176    4053 status.go:343] host is not running, skipping remaining checks
	I0718 20:46:09.777184    4053 status.go:257] ha-440000-m02 status: &{Name:ha-440000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0718 20:46:09.777194    4053 status.go:255] checking status of ha-440000-m03 ...
	I0718 20:46:09.777443    4053 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 20:46:09.777464    4053 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 20:46:09.786025    4053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51755
	I0718 20:46:09.786364    4053 main.go:141] libmachine: () Calling .GetVersion
	I0718 20:46:09.786690    4053 main.go:141] libmachine: Using API Version  1
	I0718 20:46:09.786705    4053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 20:46:09.786888    4053 main.go:141] libmachine: () Calling .GetMachineName
	I0718 20:46:09.786986    4053 main.go:141] libmachine: (ha-440000-m03) Calling .GetState
	I0718 20:46:09.787070    4053 main.go:141] libmachine: (ha-440000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 20:46:09.787159    4053 main.go:141] libmachine: (ha-440000-m03) DBG | hyperkit pid from json: 3623
	I0718 20:46:09.788135    4053 status.go:330] ha-440000-m03 host status = "Running" (err=<nil>)
	I0718 20:46:09.788145    4053 host.go:66] Checking if "ha-440000-m03" exists ...
	I0718 20:46:09.788405    4053 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 20:46:09.788426    4053 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 20:46:09.796865    4053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51757
	I0718 20:46:09.797233    4053 main.go:141] libmachine: () Calling .GetVersion
	I0718 20:46:09.797581    4053 main.go:141] libmachine: Using API Version  1
	I0718 20:46:09.797597    4053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 20:46:09.797795    4053 main.go:141] libmachine: () Calling .GetMachineName
	I0718 20:46:09.797905    4053 main.go:141] libmachine: (ha-440000-m03) Calling .GetIP
	I0718 20:46:09.797989    4053 host.go:66] Checking if "ha-440000-m03" exists ...
	I0718 20:46:09.798250    4053 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 20:46:09.798273    4053 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 20:46:09.806587    4053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51759
	I0718 20:46:09.806931    4053 main.go:141] libmachine: () Calling .GetVersion
	I0718 20:46:09.807219    4053 main.go:141] libmachine: Using API Version  1
	I0718 20:46:09.807228    4053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 20:46:09.807447    4053 main.go:141] libmachine: () Calling .GetMachineName
	I0718 20:46:09.807550    4053 main.go:141] libmachine: (ha-440000-m03) Calling .DriverName
	I0718 20:46:09.807677    4053 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 20:46:09.807690    4053 main.go:141] libmachine: (ha-440000-m03) Calling .GetSSHHostname
	I0718 20:46:09.807774    4053 main.go:141] libmachine: (ha-440000-m03) Calling .GetSSHPort
	I0718 20:46:09.807851    4053 main.go:141] libmachine: (ha-440000-m03) Calling .GetSSHKeyPath
	I0718 20:46:09.807934    4053 main.go:141] libmachine: (ha-440000-m03) Calling .GetSSHUsername
	I0718 20:46:09.808008    4053 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/ha-440000-m03/id_rsa Username:docker}
	I0718 20:46:09.840820    4053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 20:46:09.851570    4053 kubeconfig.go:125] found "ha-440000" server: "https://192.169.0.254:8443"
	I0718 20:46:09.851589    4053 api_server.go:166] Checking apiserver status ...
	I0718 20:46:09.851627    4053 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 20:46:09.862888    4053 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2080/cgroup
	W0718 20:46:09.872384    4053 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2080/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0718 20:46:09.872440    4053 ssh_runner.go:195] Run: ls
	I0718 20:46:09.875849    4053 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0718 20:46:09.879607    4053 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0718 20:46:09.879620    4053 status.go:422] ha-440000-m03 apiserver status = Running (err=<nil>)
	I0718 20:46:09.879629    4053 status.go:257] ha-440000-m03 status: &{Name:ha-440000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0718 20:46:09.879639    4053 status.go:255] checking status of ha-440000-m04 ...
	I0718 20:46:09.879913    4053 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 20:46:09.879941    4053 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 20:46:09.888326    4053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51763
	I0718 20:46:09.888670    4053 main.go:141] libmachine: () Calling .GetVersion
	I0718 20:46:09.889032    4053 main.go:141] libmachine: Using API Version  1
	I0718 20:46:09.889047    4053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 20:46:09.889264    4053 main.go:141] libmachine: () Calling .GetMachineName
	I0718 20:46:09.889370    4053 main.go:141] libmachine: (ha-440000-m04) Calling .GetState
	I0718 20:46:09.889449    4053 main.go:141] libmachine: (ha-440000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 20:46:09.889531    4053 main.go:141] libmachine: (ha-440000-m04) DBG | hyperkit pid from json: 3728
	I0718 20:46:09.890541    4053 status.go:330] ha-440000-m04 host status = "Running" (err=<nil>)
	I0718 20:46:09.890550    4053 host.go:66] Checking if "ha-440000-m04" exists ...
	I0718 20:46:09.890805    4053 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 20:46:09.890826    4053 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 20:46:09.899393    4053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51765
	I0718 20:46:09.899791    4053 main.go:141] libmachine: () Calling .GetVersion
	I0718 20:46:09.900143    4053 main.go:141] libmachine: Using API Version  1
	I0718 20:46:09.900160    4053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 20:46:09.900367    4053 main.go:141] libmachine: () Calling .GetMachineName
	I0718 20:46:09.900500    4053 main.go:141] libmachine: (ha-440000-m04) Calling .GetIP
	I0718 20:46:09.900591    4053 host.go:66] Checking if "ha-440000-m04" exists ...
	I0718 20:46:09.900860    4053 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 20:46:09.900882    4053 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 20:46:09.909360    4053 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51767
	I0718 20:46:09.909698    4053 main.go:141] libmachine: () Calling .GetVersion
	I0718 20:46:09.910033    4053 main.go:141] libmachine: Using API Version  1
	I0718 20:46:09.910049    4053 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 20:46:09.910256    4053 main.go:141] libmachine: () Calling .GetMachineName
	I0718 20:46:09.910364    4053 main.go:141] libmachine: (ha-440000-m04) Calling .DriverName
	I0718 20:46:09.910491    4053 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 20:46:09.910511    4053 main.go:141] libmachine: (ha-440000-m04) Calling .GetSSHHostname
	I0718 20:46:09.910591    4053 main.go:141] libmachine: (ha-440000-m04) Calling .GetSSHPort
	I0718 20:46:09.910670    4053 main.go:141] libmachine: (ha-440000-m04) Calling .GetSSHKeyPath
	I0718 20:46:09.910754    4053 main.go:141] libmachine: (ha-440000-m04) Calling .GetSSHUsername
	I0718 20:46:09.910834    4053 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/ha-440000-m04/id_rsa Username:docker}
	I0718 20:46:09.945368    4053 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 20:46:09.956630    4053 status.go:257] ha-440000-m04 status: &{Name:ha-440000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (8.70s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.26s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.26s)

TestMultiControlPlane/serial/RestartSecondaryNode (41.19s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 node start m02 -v=7 --alsologtostderr
E0718 20:46:13.775945    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
E0718 20:46:32.405136    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-440000 node start m02 -v=7 --alsologtostderr: (40.677520162s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (41.19s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.33s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.33s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (195.88s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-440000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-440000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-440000 -v=7 --alsologtostderr: (27.115400683s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-440000 --wait=true -v=7 --alsologtostderr
E0718 20:47:54.328138    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-440000 --wait=true -v=7 --alsologtostderr: (2m48.650281081s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-440000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (195.88s)

TestMultiControlPlane/serial/DeleteSecondaryNode (8.11s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 node delete m03 -v=7 --alsologtostderr
E0718 20:50:10.461257    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-440000 node delete m03 -v=7 --alsologtostderr: (7.674063213s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (8.11s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.25s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.25s)

TestMultiControlPlane/serial/StopCluster (24.97s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 stop -v=7 --alsologtostderr
E0718 20:50:38.153965    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-440000 stop -v=7 --alsologtostderr: (24.879910984s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-440000 status -v=7 --alsologtostderr: exit status 7 (90.252252ms)

-- stdout --
	ha-440000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-440000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-440000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0718 20:50:40.902421    4220 out.go:291] Setting OutFile to fd 1 ...
	I0718 20:50:40.902614    4220 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:50:40.902619    4220 out.go:304] Setting ErrFile to fd 2...
	I0718 20:50:40.902623    4220 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 20:50:40.902808    4220 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1411/.minikube/bin
	I0718 20:50:40.903617    4220 out.go:298] Setting JSON to false
	I0718 20:50:40.903918    4220 notify.go:220] Checking for updates...
	I0718 20:50:40.903917    4220 mustload.go:65] Loading cluster: ha-440000
	I0718 20:50:40.904243    4220 config.go:182] Loaded profile config "ha-440000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 20:50:40.904259    4220 status.go:255] checking status of ha-440000 ...
	I0718 20:50:40.904640    4220 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 20:50:40.904687    4220 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 20:50:40.913797    4220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52074
	I0718 20:50:40.914121    4220 main.go:141] libmachine: () Calling .GetVersion
	I0718 20:50:40.914530    4220 main.go:141] libmachine: Using API Version  1
	I0718 20:50:40.914540    4220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 20:50:40.914776    4220 main.go:141] libmachine: () Calling .GetMachineName
	I0718 20:50:40.914887    4220 main.go:141] libmachine: (ha-440000) Calling .GetState
	I0718 20:50:40.914967    4220 main.go:141] libmachine: (ha-440000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 20:50:40.915032    4220 main.go:141] libmachine: (ha-440000) DBG | hyperkit pid from json: 4137
	I0718 20:50:40.915940    4220 main.go:141] libmachine: (ha-440000) DBG | hyperkit pid 4137 missing from process table
	I0718 20:50:40.915975    4220 status.go:330] ha-440000 host status = "Stopped" (err=<nil>)
	I0718 20:50:40.915984    4220 status.go:343] host is not running, skipping remaining checks
	I0718 20:50:40.915990    4220 status.go:257] ha-440000 status: &{Name:ha-440000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0718 20:50:40.916009    4220 status.go:255] checking status of ha-440000-m02 ...
	I0718 20:50:40.916257    4220 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 20:50:40.916278    4220 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 20:50:40.924554    4220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52076
	I0718 20:50:40.924889    4220 main.go:141] libmachine: () Calling .GetVersion
	I0718 20:50:40.925241    4220 main.go:141] libmachine: Using API Version  1
	I0718 20:50:40.925264    4220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 20:50:40.925470    4220 main.go:141] libmachine: () Calling .GetMachineName
	I0718 20:50:40.925583    4220 main.go:141] libmachine: (ha-440000-m02) Calling .GetState
	I0718 20:50:40.925676    4220 main.go:141] libmachine: (ha-440000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 20:50:40.925760    4220 main.go:141] libmachine: (ha-440000-m02) DBG | hyperkit pid from json: 4145
	I0718 20:50:40.926684    4220 main.go:141] libmachine: (ha-440000-m02) DBG | hyperkit pid 4145 missing from process table
	I0718 20:50:40.926727    4220 status.go:330] ha-440000-m02 host status = "Stopped" (err=<nil>)
	I0718 20:50:40.926737    4220 status.go:343] host is not running, skipping remaining checks
	I0718 20:50:40.926750    4220 status.go:257] ha-440000-m02 status: &{Name:ha-440000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0718 20:50:40.926761    4220 status.go:255] checking status of ha-440000-m04 ...
	I0718 20:50:40.927007    4220 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 20:50:40.927033    4220 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 20:50:40.935330    4220 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52078
	I0718 20:50:40.935671    4220 main.go:141] libmachine: () Calling .GetVersion
	I0718 20:50:40.936017    4220 main.go:141] libmachine: Using API Version  1
	I0718 20:50:40.936033    4220 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 20:50:40.936234    4220 main.go:141] libmachine: () Calling .GetMachineName
	I0718 20:50:40.936337    4220 main.go:141] libmachine: (ha-440000-m04) Calling .GetState
	I0718 20:50:40.936414    4220 main.go:141] libmachine: (ha-440000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 20:50:40.936496    4220 main.go:141] libmachine: (ha-440000-m04) DBG | hyperkit pid from json: 4163
	I0718 20:50:40.937438    4220 main.go:141] libmachine: (ha-440000-m04) DBG | hyperkit pid 4163 missing from process table
	I0718 20:50:40.937483    4220 status.go:330] ha-440000-m04 host status = "Stopped" (err=<nil>)
	I0718 20:50:40.937494    4220 status.go:343] host is not running, skipping remaining checks
	I0718 20:50:40.937500    4220 status.go:257] ha-440000-m04 status: &{Name:ha-440000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiControlPlane/serial/StopCluster (24.97s)

TestMultiControlPlane/serial/RestartCluster (202.69s)

=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-440000 --wait=true -v=7 --alsologtostderr --driver=hyperkit 
E0718 20:51:13.764182    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
E0718 20:52:36.823515    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-darwin-amd64 start -p ha-440000 --wait=true -v=7 --alsologtostderr --driver=hyperkit : (3m22.239062527s)
ha_test.go:566: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiControlPlane/serial/RestartCluster (202.69s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.26s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.26s)

TestMultiControlPlane/serial/AddSecondaryNode (75.34s)

=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-440000 --control-plane -v=7 --alsologtostderr
E0718 20:55:10.464734    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-440000 --control-plane -v=7 --alsologtostderr: (1m14.884881115s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-440000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddSecondaryNode (75.34s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.33s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.33s)

TestImageBuild/serial/Setup (40.91s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-613000 --driver=hyperkit 
E0718 20:56:13.768301    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-613000 --driver=hyperkit : (40.913450092s)
--- PASS: TestImageBuild/serial/Setup (40.91s)

TestImageBuild/serial/NormalBuild (1.25s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-613000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-613000: (1.253400096s)
--- PASS: TestImageBuild/serial/NormalBuild (1.25s)

TestImageBuild/serial/BuildWithBuildArg (0.48s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-613000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.48s)

TestImageBuild/serial/BuildWithDockerIgnore (0.24s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-613000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.24s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.22s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-613000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.22s)

TestJSONOutput/start/Command (51.44s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-851000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-851000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (51.441195754s)
--- PASS: TestJSONOutput/start/Command (51.44s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.48s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-851000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.48s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.45s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-851000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.45s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.35s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-851000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-851000 --output=json --user=testUser: (8.349780275s)
--- PASS: TestJSONOutput/stop/Command (8.35s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.57s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-854000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-854000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (357.738264ms)

-- stdout --
	{"specversion":"1.0","id":"ce22b8be-a5fb-4239-902f-64259626acb9","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-854000] minikube v1.33.1 on Darwin 14.5","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"de6db5c4-7c43-4b7f-be69-c6e05b29058a","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19302"}}
	{"specversion":"1.0","id":"8d25a285-0c49-4e39-ad78-e8cc5d48c247","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19302-1411/kubeconfig"}}
	{"specversion":"1.0","id":"3915b189-f587-4415-b2ac-bae5e6b58e84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"113f9c67-7d30-4353-9aeb-e1358691bee7","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dccfaa34-4354-403f-8e23-1748f6ce0a0c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1411/.minikube"}}
	{"specversion":"1.0","id":"5c768c33-ada6-4ec4-8894-e25955a143cd","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"166be909-f87f-4ea8-ac91-49a507a2192d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-854000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-854000
--- PASS: TestErrorJSONOutput (0.57s)
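The lines above are CloudEvents-style JSON records emitted by `minikube start --output=json`, one per line on stdout. A minimal sketch of consuming such a line, using the final error event copied verbatim from the output above:

```python
import json

# One CloudEvents-style line as emitted by `minikube start --output=json`
# (sample taken verbatim from the test output above).
line = ('{"specversion":"1.0","id":"166be909-f87f-4ea8-ac91-49a507a2192d",'
        '"source":"https://minikube.sigs.k8s.io/",'
        '"type":"io.k8s.sigs.minikube.error",'
        '"datacontenttype":"application/json",'
        '"data":{"advice":"","exitcode":"56","issues":"",'
        '"message":"The driver \'fail\' is not supported on darwin/amd64",'
        '"name":"DRV_UNSUPPORTED_OS","url":""}}')

event = json.loads(line)
kind = event["type"].rsplit(".", 1)[-1]   # last segment of the event type, e.g. "error"
message = event["data"]["message"]
print(kind, message)
```

The test asserts exit status 56, which matches the `exitcode` field carried inside the error event's `data` payload.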

                                                
                                    
TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

                                                
                                    
TestMinikubeProfile (217.41s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-324000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-324000 --driver=hyperkit : (54.09441914s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-337000 --driver=hyperkit 
E0718 21:00:10.469747    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-337000 --driver=hyperkit : (2m33.894489142s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-324000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-337000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-337000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-337000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-337000: (3.42712135s)
helpers_test.go:175: Cleaning up "first-324000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-324000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-324000: (5.256878769s)
--- PASS: TestMinikubeProfile (217.41s)
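The test inspects `profile list -ojson` after starting each profile. A hedged sketch of reading that output; the top-level `valid`/`invalid` keys and the per-profile `Name`/`Status` fields are an assumption about the schema, not captured in this report:

```python
import json

# Hypothetical sample of `minikube profile list -o json` output; the
# "valid"/"invalid" keys and per-profile fields are assumptions, not
# taken from this report.
sample = json.dumps({
    "invalid": [],
    "valid": [
        {"Name": "first-324000", "Status": "Running"},
        {"Name": "second-337000", "Status": "Running"},
    ],
})

profiles = json.loads(sample)
names = [p["Name"] for p in profiles.get("valid", [])]
print(names)
```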

                                                
                                    
TestMountStart/serial/StartWithMountFirst (21.59s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-533000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
E0718 21:01:13.773592    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-533000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : (20.593636531s)
--- PASS: TestMountStart/serial/StartWithMountFirst (21.59s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.29s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-533000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-533000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.29s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (18.4s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-544000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit 
E0718 21:01:33.526216    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-544000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit : (17.398657674s)
--- PASS: TestMountStart/serial/StartWithMountSecond (18.40s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.33s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-544000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-544000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.33s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.34s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-533000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-533000 --alsologtostderr -v=5: (2.341357896s)
--- PASS: TestMountStart/serial/DeleteFirst (2.34s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-544000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-544000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.30s)

                                                
                                    
TestMountStart/serial/Stop (2.38s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-544000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-544000: (2.377771963s)
--- PASS: TestMountStart/serial/Stop (2.38s)

                                                
                                    
TestMountStart/serial/RestartStopped (20.06s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-544000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-544000: (19.055866209s)
--- PASS: TestMountStart/serial/RestartStopped (20.06s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.3s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-544000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-544000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.30s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (115.56s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-127000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
multinode_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-127000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (1m55.310153164s)
multinode_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (115.56s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.13s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-127000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-127000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-127000 -- rollout status deployment/busybox: (2.504088272s)
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-127000 -- exec busybox-fc5497c4f-t4zx9 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-127000 -- exec busybox-fc5497c4f-zzsc5 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-127000 -- exec busybox-fc5497c4f-t4zx9 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-127000 -- exec busybox-fc5497c4f-zzsc5 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-127000 -- exec busybox-fc5497c4f-t4zx9 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-127000 -- exec busybox-fc5497c4f-zzsc5 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.13s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.86s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-127000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-127000 -- exec busybox-fc5497c4f-t4zx9 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-127000 -- exec busybox-fc5497c4f-t4zx9 -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-127000 -- exec busybox-fc5497c4f-zzsc5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-127000 -- exec busybox-fc5497c4f-zzsc5 -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.86s)
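The test resolves the host IP inside each pod with `nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3`, i.e. line 5 of the nslookup output, third space-separated field, then pings it. A sketch replicating that extraction; the sample transcript below is a typical busybox nslookup layout (an assumption, not captured in this report), with the host IP matching the `ping -c 1 192.169.0.1` target above:

```python
# Replicates: nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3
# Sample busybox-style nslookup output (assumed layout, not from this report).
sample = """\
Server:    10.96.0.10
Address 1: 10.96.0.10 kube-dns.kube-system.svc.cluster.local

Name:      host.minikube.internal
Address 1: 192.169.0.1 host.minikube.internal
"""

line5 = sample.splitlines()[4]    # awk 'NR==5' -> fifth line
host_ip = line5.split(" ")[2]     # cut -d' ' -f3 -> third field
print(host_ip)
```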

                                                
                                    
TestMultiNode/serial/AddNode (44.48s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-127000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-127000 -v 3 --alsologtostderr: (44.168103612s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (44.48s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.05s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-127000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.05s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.18s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.18s)

                                                
                                    
TestMultiNode/serial/CopyFile (5.28s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 cp testdata/cp-test.txt multinode-127000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 ssh -n multinode-127000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 cp multinode-127000:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile4019457516/001/cp-test_multinode-127000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 ssh -n multinode-127000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 cp multinode-127000:/home/docker/cp-test.txt multinode-127000-m02:/home/docker/cp-test_multinode-127000_multinode-127000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 ssh -n multinode-127000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 ssh -n multinode-127000-m02 "sudo cat /home/docker/cp-test_multinode-127000_multinode-127000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 cp multinode-127000:/home/docker/cp-test.txt multinode-127000-m03:/home/docker/cp-test_multinode-127000_multinode-127000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 ssh -n multinode-127000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 ssh -n multinode-127000-m03 "sudo cat /home/docker/cp-test_multinode-127000_multinode-127000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 cp testdata/cp-test.txt multinode-127000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 ssh -n multinode-127000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 cp multinode-127000-m02:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile4019457516/001/cp-test_multinode-127000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 ssh -n multinode-127000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 cp multinode-127000-m02:/home/docker/cp-test.txt multinode-127000:/home/docker/cp-test_multinode-127000-m02_multinode-127000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 ssh -n multinode-127000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 ssh -n multinode-127000 "sudo cat /home/docker/cp-test_multinode-127000-m02_multinode-127000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 cp multinode-127000-m02:/home/docker/cp-test.txt multinode-127000-m03:/home/docker/cp-test_multinode-127000-m02_multinode-127000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 ssh -n multinode-127000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 ssh -n multinode-127000-m03 "sudo cat /home/docker/cp-test_multinode-127000-m02_multinode-127000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 cp testdata/cp-test.txt multinode-127000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 ssh -n multinode-127000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 cp multinode-127000-m03:/home/docker/cp-test.txt /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestMultiNodeserialCopyFile4019457516/001/cp-test_multinode-127000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 ssh -n multinode-127000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 cp multinode-127000-m03:/home/docker/cp-test.txt multinode-127000:/home/docker/cp-test_multinode-127000-m03_multinode-127000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 ssh -n multinode-127000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 ssh -n multinode-127000 "sudo cat /home/docker/cp-test_multinode-127000-m03_multinode-127000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 cp multinode-127000-m03:/home/docker/cp-test.txt multinode-127000-m02:/home/docker/cp-test_multinode-127000-m03_multinode-127000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 ssh -n multinode-127000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 ssh -n multinode-127000-m02 "sudo cat /home/docker/cp-test_multinode-127000-m03_multinode-127000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.28s)

                                                
                                    
TestMultiNode/serial/StopNode (2.85s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-127000 node stop m03: (2.344172553s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-127000 status: exit status 7 (249.865561ms)

-- stdout --
	multinode-127000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-127000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-127000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-127000 status --alsologtostderr: exit status 7 (257.599995ms)

-- stdout --
	multinode-127000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-127000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-127000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0718 21:05:09.679951    5272 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:05:09.680136    5272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:05:09.680142    5272 out.go:304] Setting ErrFile to fd 2...
	I0718 21:05:09.680146    5272 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:05:09.680342    5272 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1411/.minikube/bin
	I0718 21:05:09.680518    5272 out.go:298] Setting JSON to false
	I0718 21:05:09.680540    5272 mustload.go:65] Loading cluster: multinode-127000
	I0718 21:05:09.680577    5272 notify.go:220] Checking for updates...
	I0718 21:05:09.680843    5272 config.go:182] Loaded profile config "multinode-127000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:05:09.680857    5272 status.go:255] checking status of multinode-127000 ...
	I0718 21:05:09.681241    5272 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 21:05:09.681296    5272 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 21:05:09.690307    5272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53169
	I0718 21:05:09.690637    5272 main.go:141] libmachine: () Calling .GetVersion
	I0718 21:05:09.691072    5272 main.go:141] libmachine: Using API Version  1
	I0718 21:05:09.691081    5272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 21:05:09.691270    5272 main.go:141] libmachine: () Calling .GetMachineName
	I0718 21:05:09.691385    5272 main.go:141] libmachine: (multinode-127000) Calling .GetState
	I0718 21:05:09.691466    5272 main.go:141] libmachine: (multinode-127000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 21:05:09.691538    5272 main.go:141] libmachine: (multinode-127000) DBG | hyperkit pid from json: 4983
	I0718 21:05:09.692726    5272 status.go:330] multinode-127000 host status = "Running" (err=<nil>)
	I0718 21:05:09.692743    5272 host.go:66] Checking if "multinode-127000" exists ...
	I0718 21:05:09.693005    5272 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 21:05:09.693031    5272 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 21:05:09.701574    5272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53171
	I0718 21:05:09.701909    5272 main.go:141] libmachine: () Calling .GetVersion
	I0718 21:05:09.702297    5272 main.go:141] libmachine: Using API Version  1
	I0718 21:05:09.702318    5272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 21:05:09.708582    5272 main.go:141] libmachine: () Calling .GetMachineName
	I0718 21:05:09.708724    5272 main.go:141] libmachine: (multinode-127000) Calling .GetIP
	I0718 21:05:09.708802    5272 host.go:66] Checking if "multinode-127000" exists ...
	I0718 21:05:09.709038    5272 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 21:05:09.709057    5272 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 21:05:09.717476    5272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53173
	I0718 21:05:09.717833    5272 main.go:141] libmachine: () Calling .GetVersion
	I0718 21:05:09.718151    5272 main.go:141] libmachine: Using API Version  1
	I0718 21:05:09.718162    5272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 21:05:09.718385    5272 main.go:141] libmachine: () Calling .GetMachineName
	I0718 21:05:09.718499    5272 main.go:141] libmachine: (multinode-127000) Calling .DriverName
	I0718 21:05:09.718640    5272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 21:05:09.718660    5272 main.go:141] libmachine: (multinode-127000) Calling .GetSSHHostname
	I0718 21:05:09.718738    5272 main.go:141] libmachine: (multinode-127000) Calling .GetSSHPort
	I0718 21:05:09.718843    5272 main.go:141] libmachine: (multinode-127000) Calling .GetSSHKeyPath
	I0718 21:05:09.718939    5272 main.go:141] libmachine: (multinode-127000) Calling .GetSSHUsername
	I0718 21:05:09.719021    5272 sshutil.go:53] new ssh client: &{IP:192.169.0.17 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000/id_rsa Username:docker}
	I0718 21:05:09.753136    5272 ssh_runner.go:195] Run: systemctl --version
	I0718 21:05:09.757607    5272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 21:05:09.769415    5272 kubeconfig.go:125] found "multinode-127000" server: "https://192.169.0.17:8443"
	I0718 21:05:09.769438    5272 api_server.go:166] Checking apiserver status ...
	I0718 21:05:09.769473    5272 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0718 21:05:09.781405    5272 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1980/cgroup
	W0718 21:05:09.789577    5272 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1980/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0718 21:05:09.789630    5272 ssh_runner.go:195] Run: ls
	I0718 21:05:09.792752    5272 api_server.go:253] Checking apiserver healthz at https://192.169.0.17:8443/healthz ...
	I0718 21:05:09.795780    5272 api_server.go:279] https://192.169.0.17:8443/healthz returned 200:
	ok
	I0718 21:05:09.795790    5272 status.go:422] multinode-127000 apiserver status = Running (err=<nil>)
	I0718 21:05:09.795799    5272 status.go:257] multinode-127000 status: &{Name:multinode-127000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0718 21:05:09.795810    5272 status.go:255] checking status of multinode-127000-m02 ...
	I0718 21:05:09.796045    5272 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 21:05:09.796065    5272 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 21:05:09.804664    5272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53177
	I0718 21:05:09.805010    5272 main.go:141] libmachine: () Calling .GetVersion
	I0718 21:05:09.805323    5272 main.go:141] libmachine: Using API Version  1
	I0718 21:05:09.805339    5272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 21:05:09.805549    5272 main.go:141] libmachine: () Calling .GetMachineName
	I0718 21:05:09.805680    5272 main.go:141] libmachine: (multinode-127000-m02) Calling .GetState
	I0718 21:05:09.805766    5272 main.go:141] libmachine: (multinode-127000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 21:05:09.805836    5272 main.go:141] libmachine: (multinode-127000-m02) DBG | hyperkit pid from json: 5002
	I0718 21:05:09.807023    5272 status.go:330] multinode-127000-m02 host status = "Running" (err=<nil>)
	I0718 21:05:09.807034    5272 host.go:66] Checking if "multinode-127000-m02" exists ...
	I0718 21:05:09.807312    5272 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 21:05:09.807340    5272 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 21:05:09.815838    5272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53179
	I0718 21:05:09.816194    5272 main.go:141] libmachine: () Calling .GetVersion
	I0718 21:05:09.816543    5272 main.go:141] libmachine: Using API Version  1
	I0718 21:05:09.816561    5272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 21:05:09.816759    5272 main.go:141] libmachine: () Calling .GetMachineName
	I0718 21:05:09.816869    5272 main.go:141] libmachine: (multinode-127000-m02) Calling .GetIP
	I0718 21:05:09.816952    5272 host.go:66] Checking if "multinode-127000-m02" exists ...
	I0718 21:05:09.817186    5272 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 21:05:09.817208    5272 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 21:05:09.825519    5272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53181
	I0718 21:05:09.825859    5272 main.go:141] libmachine: () Calling .GetVersion
	I0718 21:05:09.826197    5272 main.go:141] libmachine: Using API Version  1
	I0718 21:05:09.826214    5272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 21:05:09.826424    5272 main.go:141] libmachine: () Calling .GetMachineName
	I0718 21:05:09.826551    5272 main.go:141] libmachine: (multinode-127000-m02) Calling .DriverName
	I0718 21:05:09.826685    5272 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0718 21:05:09.826697    5272 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHHostname
	I0718 21:05:09.826780    5272 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHPort
	I0718 21:05:09.826850    5272 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHKeyPath
	I0718 21:05:09.826955    5272 main.go:141] libmachine: (multinode-127000-m02) Calling .GetSSHUsername
	I0718 21:05:09.827030    5272 sshutil.go:53] new ssh client: &{IP:192.169.0.18 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19302-1411/.minikube/machines/multinode-127000-m02/id_rsa Username:docker}
	I0718 21:05:09.860885    5272 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0718 21:05:09.871190    5272 status.go:257] multinode-127000-m02 status: &{Name:multinode-127000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0718 21:05:09.871208    5272 status.go:255] checking status of multinode-127000-m03 ...
	I0718 21:05:09.871474    5272 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 21:05:09.871496    5272 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 21:05:09.880098    5272 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53184
	I0718 21:05:09.880430    5272 main.go:141] libmachine: () Calling .GetVersion
	I0718 21:05:09.880783    5272 main.go:141] libmachine: Using API Version  1
	I0718 21:05:09.880799    5272 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 21:05:09.881006    5272 main.go:141] libmachine: () Calling .GetMachineName
	I0718 21:05:09.881129    5272 main.go:141] libmachine: (multinode-127000-m03) Calling .GetState
	I0718 21:05:09.881211    5272 main.go:141] libmachine: (multinode-127000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 21:05:09.881284    5272 main.go:141] libmachine: (multinode-127000-m03) DBG | hyperkit pid from json: 5067
	I0718 21:05:09.882436    5272 main.go:141] libmachine: (multinode-127000-m03) DBG | hyperkit pid 5067 missing from process table
	I0718 21:05:09.882455    5272 status.go:330] multinode-127000-m03 host status = "Stopped" (err=<nil>)
	I0718 21:05:09.882461    5272 status.go:343] host is not running, skipping remaining checks
	I0718 21:05:09.882468    5272 status.go:257] multinode-127000-m03 status: &{Name:multinode-127000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.85s)

TestMultiNode/serial/StartAfterStop (41.7s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 node start m03 -v=7 --alsologtostderr
E0718 21:05:10.526377    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-127000 node start m03 -v=7 --alsologtostderr: (41.333974569s)
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (41.70s)

TestMultiNode/serial/RestartKeepsNodes (175.88s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-127000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-127000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-127000: (18.834526326s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-127000 --wait=true -v=8 --alsologtostderr
E0718 21:06:13.831269    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-127000 --wait=true -v=8 --alsologtostderr: (2m36.93058866s)
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-127000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (175.88s)

TestMultiNode/serial/DeleteNode (3.43s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-127000 node delete m03: (3.08934555s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (3.43s)

TestMultiNode/serial/StopMultiNode (16.78s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-127000 stop: (16.608729484s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-127000 status: exit status 7 (84.047771ms)

-- stdout --
	multinode-127000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-127000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-127000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-127000 status --alsologtostderr: exit status 7 (83.758967ms)

-- stdout --
	multinode-127000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-127000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

-- /stdout --
** stderr ** 
	I0718 21:09:07.649224    5398 out.go:291] Setting OutFile to fd 1 ...
	I0718 21:09:07.649417    5398 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:07.649422    5398 out.go:304] Setting ErrFile to fd 2...
	I0718 21:09:07.649426    5398 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0718 21:09:07.649610    5398 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19302-1411/.minikube/bin
	I0718 21:09:07.649787    5398 out.go:298] Setting JSON to false
	I0718 21:09:07.649810    5398 mustload.go:65] Loading cluster: multinode-127000
	I0718 21:09:07.649855    5398 notify.go:220] Checking for updates...
	I0718 21:09:07.650098    5398 config.go:182] Loaded profile config "multinode-127000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0718 21:09:07.650114    5398 status.go:255] checking status of multinode-127000 ...
	I0718 21:09:07.650455    5398 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 21:09:07.650497    5398 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 21:09:07.659265    5398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53414
	I0718 21:09:07.659592    5398 main.go:141] libmachine: () Calling .GetVersion
	I0718 21:09:07.659996    5398 main.go:141] libmachine: Using API Version  1
	I0718 21:09:07.660011    5398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 21:09:07.660237    5398 main.go:141] libmachine: () Calling .GetMachineName
	I0718 21:09:07.660335    5398 main.go:141] libmachine: (multinode-127000) Calling .GetState
	I0718 21:09:07.660416    5398 main.go:141] libmachine: (multinode-127000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 21:09:07.660522    5398 main.go:141] libmachine: (multinode-127000) DBG | hyperkit pid from json: 5329
	I0718 21:09:07.661395    5398 main.go:141] libmachine: (multinode-127000) DBG | hyperkit pid 5329 missing from process table
	I0718 21:09:07.661420    5398 status.go:330] multinode-127000 host status = "Stopped" (err=<nil>)
	I0718 21:09:07.661430    5398 status.go:343] host is not running, skipping remaining checks
	I0718 21:09:07.661436    5398 status.go:257] multinode-127000 status: &{Name:multinode-127000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0718 21:09:07.661455    5398 status.go:255] checking status of multinode-127000-m02 ...
	I0718 21:09:07.661694    5398 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0718 21:09:07.661719    5398 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0718 21:09:07.670091    5398 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53416
	I0718 21:09:07.670415    5398 main.go:141] libmachine: () Calling .GetVersion
	I0718 21:09:07.670772    5398 main.go:141] libmachine: Using API Version  1
	I0718 21:09:07.670789    5398 main.go:141] libmachine: () Calling .SetConfigRaw
	I0718 21:09:07.671014    5398 main.go:141] libmachine: () Calling .GetMachineName
	I0718 21:09:07.671137    5398 main.go:141] libmachine: (multinode-127000-m02) Calling .GetState
	I0718 21:09:07.671226    5398 main.go:141] libmachine: (multinode-127000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0718 21:09:07.671306    5398 main.go:141] libmachine: (multinode-127000-m02) DBG | hyperkit pid from json: 5340
	I0718 21:09:07.677738    5398 status.go:330] multinode-127000-m02 host status = "Stopped" (err=<nil>)
	I0718 21:09:07.677741    5398 main.go:141] libmachine: (multinode-127000-m02) DBG | hyperkit pid 5340 missing from process table
	I0718 21:09:07.677748    5398 status.go:343] host is not running, skipping remaining checks
	I0718 21:09:07.677755    5398 status.go:257] multinode-127000-m02 status: &{Name:multinode-127000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.78s)

TestMultiNode/serial/ValidateNameConflict (43.23s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-127000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-127000-m02 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-127000-m02 --driver=hyperkit : exit status 14 (396.448362ms)

-- stdout --
	* [multinode-127000-m02] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	! Profile name 'multinode-127000-m02' is duplicated with machine name 'multinode-127000-m02' in profile 'multinode-127000'
	X Exiting due to MK_USAGE: Profile name should be unique

** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-127000-m03 --driver=hyperkit 
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-127000-m03 --driver=hyperkit : (39.092828671s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-127000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-127000: exit status 80 (263.305767ms)

-- stdout --
	* Adding node m03 to cluster multinode-127000 as [worker]
	
	

-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-127000-m03 already exists in multinode-127000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-127000-m03
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-127000-m03: (3.423963324s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (43.23s)

TestPreload (176.22s)

=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-823000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-823000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m52.014107285s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-823000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-823000 image pull gcr.io/k8s-minikube/busybox: (1.336491548s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-823000
E0718 21:15:10.545796    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-823000: (8.399470327s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-823000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-823000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (49.059580551s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-823000 image list
helpers_test.go:175: Cleaning up "test-preload-823000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-823000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-823000: (5.251972782s)
--- PASS: TestPreload (176.22s)

TestSkaffold (112.64s)

=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3733396404 version
skaffold_test.go:59: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3733396404 version: (1.735219162s)
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-456000 --memory=2600 --driver=hyperkit 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-456000 --memory=2600 --driver=hyperkit : (36.387588016s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3733396404 run --minikube-profile skaffold-456000 --kube-context skaffold-456000 --status-check=true --port-forward=false --interactive=false
E0718 21:18:13.606084    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/skaffold.exe3733396404 run --minikube-profile skaffold-456000 --kube-context skaffold-456000 --status-check=true --port-forward=false --interactive=false: (56.120367935s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-6b5c54f5f8-6ts7s" [3bacaa99-3e6f-443a-bdd1-9581143db9c7] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.005039286s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-b57c589-t5vgv" [4c1ca79e-8af9-4775-a4c6-177ed8d22b9a] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.003866353s
helpers_test.go:175: Cleaning up "skaffold-456000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-456000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-456000: (5.256965935s)
--- PASS: TestSkaffold (112.64s)

TestRunningBinaryUpgrade (88.63s)

=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1206259311 start -p running-upgrade-828000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:120: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1206259311 start -p running-upgrade-828000 --memory=2200 --vm-driver=hyperkit : (49.670326392s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-828000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-828000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (32.261182797s)
helpers_test.go:175: Cleaning up "running-upgrade-828000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-828000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-828000: (5.241978163s)
--- PASS: TestRunningBinaryUpgrade (88.63s)

TestKubernetesUpgrade (121.17s)

=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-296000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
E0718 21:24:06.254352    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0718 21:24:06.260280    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0718 21:24:06.270803    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0718 21:24:06.291064    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0718 21:24:06.332465    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0718 21:24:06.413442    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0718 21:24:06.575534    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0718 21:24:06.896801    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0718 21:24:07.537290    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0718 21:24:08.818858    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0718 21:24:11.379726    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0718 21:24:16.501054    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0718 21:24:26.741898    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-296000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : (53.53608473s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-296000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-296000: (2.37886215s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-296000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-296000 status --format={{.Host}}: exit status 7 (66.74512ms)

-- stdout --
	Stopped

-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-296000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-296000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperkit : (34.335633232s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-296000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-296000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-296000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit : exit status 106 (519.951056ms)

-- stdout --
	* [kubernetes-upgrade-296000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-296000
	    minikube start -p kubernetes-upgrade-296000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-2960002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-296000 --kubernetes-version=v1.31.0-beta.0
	    

** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-296000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperkit 
E0718 21:25:10.564294    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-296000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperkit : (25.036213899s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-296000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-296000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-296000: (5.243949548s)
--- PASS: TestKubernetesUpgrade (121.17s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.54s)

=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19302
- KUBECONFIG=/Users/jenkins/minikube-integration/19302-1411/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1290676151/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1290676151/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1290676151/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current1290676151/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.54s)

TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.71s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19302
- KUBECONFIG=/Users/jenkins/minikube-integration/19302-1411/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current669596646/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

$ sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current669596646/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current669596646/001/.minikube/bin/docker-machine-driver-hyperkit 

! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current669596646/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.71s)

TestStoppedBinaryUpgrade/Setup (0.91s)

=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.91s)

TestStoppedBinaryUpgrade/Upgrade (114.92s)

=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1375991422 start -p stopped-upgrade-002000 --memory=2200 --vm-driver=hyperkit 
E0718 21:25:28.186981    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1375991422 start -p stopped-upgrade-002000 --memory=2200 --vm-driver=hyperkit : (43.057694951s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1375991422 -p stopped-upgrade-002000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/minikube-v1.26.0.1375991422 -p stopped-upgrade-002000 stop: (8.314670688s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-002000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0718 21:26:13.870489    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
E0718 21:26:50.111796    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-002000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (1m3.543531198s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (114.92s)

TestPause/serial/Start (90.12s)

=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-928000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
E0718 21:25:56.931225    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-928000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : (1m30.121641696s)
--- PASS: TestPause/serial/Start (90.12s)

TestPause/serial/SecondStartNoReconfiguration (38.24s)

=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-928000 --alsologtostderr -v=1 --driver=hyperkit 
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-928000 --alsologtostderr -v=1 --driver=hyperkit : (38.224153916s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (38.24s)

TestStoppedBinaryUpgrade/MinikubeLogs (2.84s)

=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-002000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-002000: (2.841235906s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.84s)

TestNoKubernetes/serial/StartNoK8sWithVersion (0.77s)

=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-347000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-347000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (767.338287ms)

-- stdout --
	* [NoKubernetes-347000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19302
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19302-1411/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19302-1411/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version
** /stderr **
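The MK_USAGE failure above is the expected behavior under test: minikube refuses `--kubernetes-version` combined with `--no-kubernetes` and exits with status 14. A minimal, hypothetical sketch of this kind of mutually exclusive flag validation (argparse-based for illustration only; it is not minikube's actual Go flag handling):

```python
import argparse
import sys

def parse_args(argv):
    """Illustrative CLI validation: reject --kubernetes-version with --no-kubernetes."""
    parser = argparse.ArgumentParser(prog="start")
    parser.add_argument("--no-kubernetes", action="store_true")
    parser.add_argument("--kubernetes-version")
    args = parser.parse_args(argv)
    if args.no_kubernetes and args.kubernetes_version:
        # Mirrors the log: usage errors exit with status 14 (MK_USAGE).
        print("cannot specify --kubernetes-version with --no-kubernetes",
              file=sys.stderr)
        sys.exit(14)
    return args
```

Calling `parse_args(["--no-kubernetes", "--kubernetes-version", "1.20"])` exits with status 14, matching the `exit status 14` recorded by the test.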
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.77s)

TestNoKubernetes/serial/StartWithK8s (40.78s)

=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-347000 --driver=hyperkit 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-347000 --driver=hyperkit : (40.617353796s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-347000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (40.78s)

TestPause/serial/Pause (0.56s)

=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-928000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.56s)

TestPause/serial/VerifyStatus (0.16s)

=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-928000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-928000 --output=json --layout=cluster: exit status 2 (161.57364ms)

-- stdout --
	{"Name":"pause-928000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-928000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}
-- /stdout --
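The `--layout=cluster` payload reports component health as HTTP-style status codes (200 OK, 405 Stopped, 418 Paused), and `minikube status` exits non-zero (here 2) because not every component is Running, which is exactly what this test expects after `pause`. A small sketch decoding the payload with Python's stdlib `json` (payload copied verbatim from the log):

```python
import json

# Output of `minikube status -p pause-928000 --output=json --layout=cluster`,
# copied from the log above.
payload = '''{"Name":"pause-928000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-928000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}'''

status = json.loads(payload)
# Flatten per-node component states into one name -> StatusName map.
components = {
    name: comp["StatusName"]
    for node in status["Nodes"]
    for name, comp in node["Components"].items()
}
print(status["StatusName"], components)
# Paused {'apiserver': 'Paused', 'kubelet': 'Stopped'}
```

The cluster is "Paused" with the apiserver paused and the kubelet stopped, so the non-zero exit is the healthy outcome here.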
--- PASS: TestPause/serial/VerifyStatus (0.16s)

TestPause/serial/Unpause (0.52s)

=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-928000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.52s)

TestPause/serial/PauseAgain (0.58s)

=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-928000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.58s)

TestPause/serial/DeletePaused (5.24s)

=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-928000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-928000 --alsologtostderr -v=5: (5.236536674s)
--- PASS: TestPause/serial/DeletePaused (5.24s)

TestPause/serial/VerifyDeletedResources (0.19s)

=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.19s)

TestNetworkPlugins/group/auto/Start (63.88s)

=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-709000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-709000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit : (1m3.87875531s)
--- PASS: TestNetworkPlugins/group/auto/Start (63.88s)

TestNoKubernetes/serial/StartWithStopK8s (13.51s)

=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-347000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-347000 --no-kubernetes --driver=hyperkit : (10.975052891s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-347000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-347000 status -o json: exit status 2 (146.65047ms)

-- stdout --
	{"Name":"NoKubernetes-347000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-347000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-347000: (2.39036761s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (13.51s)

TestNoKubernetes/serial/Start (20.87s)

=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-347000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-347000 --no-kubernetes --driver=hyperkit : (20.867231213s)
--- PASS: TestNoKubernetes/serial/Start (20.87s)

TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-347000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-347000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (126.370866ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
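The non-zero exit here is the assertion the test wants: `systemctl is-active --quiet` exits 0 only when the unit is active (3 is the conventional "inactive" status, visible in the "Process exited with status 3" line), and ssh propagates the remote exit status. A minimal sketch of that exit-status check, using a stand-in command since no systemd unit is assumed here:

```python
import subprocess

def kubelet_active(probe_cmd):
    """Return True iff the probe exits 0, mirroring
    `minikube ssh ... "sudo systemctl is-active --quiet service kubelet"`."""
    return subprocess.run(probe_cmd).returncode == 0

# Stand-in for an inactive unit: systemctl is-active exits 3 when inactive.
print(kubelet_active(["sh", "-c", "exit 3"]))  # False
```

The test passes precisely because this probe reports "not running" on a `--no-kubernetes` node.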
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)

TestNoKubernetes/serial/ProfileList (0.47s)

=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.47s)

TestNoKubernetes/serial/Stop (2.4s)

=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-347000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-347000: (2.397605105s)
--- PASS: TestNoKubernetes/serial/Stop (2.40s)

TestNetworkPlugins/group/auto/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-709000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.16s)

TestNetworkPlugins/group/auto/NetCatPod (11.15s)

=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-709000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-st64c" [1a83421e-35e1-4c0a-a7b2-9c5c53f3473a] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-st64c" [1a83421e-35e1-4c0a-a7b2-9c5c53f3473a] Running
E0718 21:29:06.266335    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.004996979s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.15s)

TestNetworkPlugins/group/auto/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-709000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

TestNetworkPlugins/group/auto/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-709000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.10s)

TestNetworkPlugins/group/auto/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-709000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.11s)

TestNetworkPlugins/group/calico/Start (188.44s)

=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-709000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit 
E0718 21:29:33.958908    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-709000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit : (3m8.437711349s)
--- PASS: TestNetworkPlugins/group/calico/Start (188.44s)

TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-347000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-347000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (127.333112ms)

** stderr ** 
	ssh: Process exited with status 3
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

TestNetworkPlugins/group/custom-flannel/Start (61.59s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-709000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
E0718 21:30:10.574273    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-709000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (1m1.587365918s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (61.59s)

TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-709000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.16s)

TestNetworkPlugins/group/custom-flannel/NetCatPod (11.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-709000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-nvdkk" [16998d0b-2d46-480a-be8b-c3387cbfa369] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-nvdkk" [16998d0b-2d46-480a-be8b-c3387cbfa369] Running
E0718 21:31:13.879885    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004107837s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.13s)

TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-709000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.13s)

TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-709000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-709000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-bkczg" [468c9ad2-2b5f-4618-8e36-beb1c1fd848b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006131263s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

TestNetworkPlugins/group/calico/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-709000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.16s)

TestNetworkPlugins/group/calico/NetCatPod (11.14s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-709000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-cdmbk" [6b888bc1-36e4-4746-ac7e-53e3ffaefa20] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-cdmbk" [6b888bc1-36e4-4746-ac7e-53e3ffaefa20] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 11.004272098s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (11.14s)

TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-709000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

TestNetworkPlugins/group/calico/Localhost (0.09s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-709000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.09s)

TestNetworkPlugins/group/calico/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-709000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.11s)

TestNetworkPlugins/group/kindnet/Start (70.61s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-709000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-709000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit : (1m10.60893514s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (70.61s)

TestNetworkPlugins/group/flannel/Start (60.22s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-709000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit 
E0718 21:33:58.876615    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/auto-709000/client.crt: no such file or directory
E0718 21:34:01.438797    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/auto-709000/client.crt: no such file or directory
E0718 21:34:06.276624    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0718 21:34:06.560595    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/auto-709000/client.crt: no such file or directory
E0718 21:34:16.801866    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/auto-709000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-709000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit : (1m0.224435535s)
--- PASS: TestNetworkPlugins/group/flannel/Start (60.22s)

TestNetworkPlugins/group/kindnet/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-h82rk" [c77d2686-4d1f-4008-8f68-cb40f27060d3] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003464123s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

TestNetworkPlugins/group/kindnet/KubeletFlags (0.15s)
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-709000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.15s)

TestNetworkPlugins/group/kindnet/NetCatPod (12.13s)
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-709000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-7v4h6" [3753a515-c79a-4fc6-978a-d9b646400ec3] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-7v4h6" [3753a515-c79a-4fc6-978a-d9b646400ec3] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.00447596s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.13s)

TestNetworkPlugins/group/kindnet/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-709000 exec deployment/netcat -- nslookup kubernetes.default
E0718 21:34:37.284626    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/auto-709000/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

TestNetworkPlugins/group/kindnet/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-709000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

TestNetworkPlugins/group/kindnet/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-709000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

TestNetworkPlugins/group/enable-default-cni/Start (207.85s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-709000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-709000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit : (3m27.846043643s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (207.85s)

TestNetworkPlugins/group/flannel/ControllerPod (6s)
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-255ms" [3c0cd201-af1b-472a-85c9-e1aa83a316ac] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.003077057s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

TestNetworkPlugins/group/flannel/KubeletFlags (0.15s)
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-709000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.15s)

TestNetworkPlugins/group/flannel/NetCatPod (9.13s)
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-709000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-xfhcm" [47d278da-e6d1-4427-b67f-33544e3b698c] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-xfhcm" [47d278da-e6d1-4427-b67f-33544e3b698c] Running
E0718 21:35:10.584067    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 9.004933042s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (9.13s)

TestNetworkPlugins/group/flannel/DNS (0.14s)
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-709000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.14s)

TestNetworkPlugins/group/flannel/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-709000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

TestNetworkPlugins/group/flannel/HairPin (0.11s)
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-709000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.11s)

TestNetworkPlugins/group/bridge/Start (89.34s)
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-709000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit 
E0718 21:36:06.292386    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/custom-flannel-709000/client.crt: no such file or directory
E0718 21:36:06.298874    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/custom-flannel-709000/client.crt: no such file or directory
E0718 21:36:06.309033    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/custom-flannel-709000/client.crt: no such file or directory
E0718 21:36:06.330175    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/custom-flannel-709000/client.crt: no such file or directory
E0718 21:36:06.372264    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/custom-flannel-709000/client.crt: no such file or directory
E0718 21:36:06.454479    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/custom-flannel-709000/client.crt: no such file or directory
E0718 21:36:06.614907    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/custom-flannel-709000/client.crt: no such file or directory
E0718 21:36:06.935816    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/custom-flannel-709000/client.crt: no such file or directory
E0718 21:36:07.577405    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/custom-flannel-709000/client.crt: no such file or directory
E0718 21:36:08.859067    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/custom-flannel-709000/client.crt: no such file or directory
E0718 21:36:11.419650    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/custom-flannel-709000/client.crt: no such file or directory
E0718 21:36:13.889423    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
E0718 21:36:16.540075    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/custom-flannel-709000/client.crt: no such file or directory
E0718 21:36:26.781063    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/custom-flannel-709000/client.crt: no such file or directory
E0718 21:36:40.170787    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/auto-709000/client.crt: no such file or directory
E0718 21:36:47.263113    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/custom-flannel-709000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-709000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit : (1m29.342494497s)
--- PASS: TestNetworkPlugins/group/bridge/Start (89.34s)

TestNetworkPlugins/group/bridge/KubeletFlags (0.16s)
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-709000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.16s)

TestNetworkPlugins/group/bridge/NetCatPod (12.14s)
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-709000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-shqmr" [0ed094a1-0291-4cbf-b0a1-5f29d7b01d87] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-shqmr" [0ed094a1-0291-4cbf-b0a1-5f29d7b01d87] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 12.003798189s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (12.14s)

TestNetworkPlugins/group/bridge/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-709000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.13s)

TestNetworkPlugins/group/bridge/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-709000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.10s)

TestNetworkPlugins/group/bridge/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-709000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

TestNetworkPlugins/group/kubenet/Start (52.37s)
=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-709000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit 
E0718 21:37:33.060481    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/calico-709000/client.crt: no such file or directory
E0718 21:37:33.065776    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/calico-709000/client.crt: no such file or directory
E0718 21:37:33.076793    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/calico-709000/client.crt: no such file or directory
E0718 21:37:33.097762    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/calico-709000/client.crt: no such file or directory
E0718 21:37:33.138773    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/calico-709000/client.crt: no such file or directory
E0718 21:37:33.220023    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/calico-709000/client.crt: no such file or directory
E0718 21:37:33.380207    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/calico-709000/client.crt: no such file or directory
E0718 21:37:33.700831    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/calico-709000/client.crt: no such file or directory
E0718 21:37:34.342520    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/calico-709000/client.crt: no such file or directory
E0718 21:37:35.623855    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/calico-709000/client.crt: no such file or directory
E0718 21:37:38.185416    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/calico-709000/client.crt: no such file or directory
E0718 21:37:43.306316    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/calico-709000/client.crt: no such file or directory
E0718 21:37:53.547133    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/calico-709000/client.crt: no such file or directory
E0718 21:38:14.029126    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/calico-709000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-709000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit : (52.366648876s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (52.37s)

TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.16s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-709000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.16s)

TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.14s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-709000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-f48v4" [c994614d-42cd-4d93-981a-2349ab91db3e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-f48v4" [c994614d-42cd-4d93-981a-2349ab91db3e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004343563s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.14s)

TestNetworkPlugins/group/kubenet/KubeletFlags (0.16s)
=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-709000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.16s)

TestNetworkPlugins/group/kubenet/NetCatPod (12.13s)
=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-709000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-zgrh2" [ef10c8c1-8540-4dd3-8da7-20e9390f4f1f] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-zgrh2" [ef10c8c1-8540-4dd3-8da7-20e9390f4f1f] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 12.002244665s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (12.13s)

TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-709000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-709000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.09s)

TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-709000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

TestNetworkPlugins/group/kubenet/DNS (0.12s)
=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-709000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.12s)

TestNetworkPlugins/group/kubenet/Localhost (0.1s)
=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-709000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.10s)

TestNetworkPlugins/group/kubenet/HairPin (0.1s)
=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-709000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.10s)
E0718 21:57:33.064108    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/calico-709000/client.crt: no such file or directory

TestStartStop/group/old-k8s-version/serial/FirstStart (167.69s)
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-738000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-738000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0: (2m47.685363709s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (167.69s)

TestStartStop/group/embed-certs/serial/FirstStart (181.41s)
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-488000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.3
E0718 21:38:54.991945    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/calico-709000/client.crt: no such file or directory
E0718 21:38:56.325019    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/auto-709000/client.crt: no such file or directory
E0718 21:39:06.286297    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0718 21:39:18.977976    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kindnet-709000/client.crt: no such file or directory
E0718 21:39:18.983179    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kindnet-709000/client.crt: no such file or directory
E0718 21:39:18.993520    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kindnet-709000/client.crt: no such file or directory
E0718 21:39:19.013670    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kindnet-709000/client.crt: no such file or directory
E0718 21:39:19.054029    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kindnet-709000/client.crt: no such file or directory
E0718 21:39:19.134975    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kindnet-709000/client.crt: no such file or directory
E0718 21:39:19.295091    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kindnet-709000/client.crt: no such file or directory
E0718 21:39:19.615980    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kindnet-709000/client.crt: no such file or directory
E0718 21:39:20.256497    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kindnet-709000/client.crt: no such file or directory
E0718 21:39:21.537802    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kindnet-709000/client.crt: no such file or directory
E0718 21:39:24.016112    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/auto-709000/client.crt: no such file or directory
E0718 21:39:24.098397    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kindnet-709000/client.crt: no such file or directory
E0718 21:39:29.218692    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kindnet-709000/client.crt: no such file or directory
E0718 21:39:39.459280    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kindnet-709000/client.crt: no such file or directory
E0718 21:39:58.930623    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/flannel-709000/client.crt: no such file or directory
E0718 21:39:58.937064    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/flannel-709000/client.crt: no such file or directory
E0718 21:39:58.947454    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/flannel-709000/client.crt: no such file or directory
E0718 21:39:58.969676    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/flannel-709000/client.crt: no such file or directory
E0718 21:39:59.011934    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/flannel-709000/client.crt: no such file or directory
E0718 21:39:59.092129    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/flannel-709000/client.crt: no such file or directory
E0718 21:39:59.252989    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/flannel-709000/client.crt: no such file or directory
E0718 21:39:59.573984    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/flannel-709000/client.crt: no such file or directory
E0718 21:39:59.942198    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kindnet-709000/client.crt: no such file or directory
E0718 21:40:00.215013    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/flannel-709000/client.crt: no such file or directory
E0718 21:40:01.495460    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/flannel-709000/client.crt: no such file or directory
E0718 21:40:04.057856    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/flannel-709000/client.crt: no such file or directory
E0718 21:40:09.179553    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/flannel-709000/client.crt: no such file or directory
E0718 21:40:10.591904    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
E0718 21:40:16.916071    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/calico-709000/client.crt: no such file or directory
E0718 21:40:19.422236    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/flannel-709000/client.crt: no such file or directory
E0718 21:40:29.340315    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0718 21:40:39.903916    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/flannel-709000/client.crt: no such file or directory
E0718 21:40:40.904382    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kindnet-709000/client.crt: no such file or directory
E0718 21:41:06.302496    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/custom-flannel-709000/client.crt: no such file or directory
E0718 21:41:13.898915    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
E0718 21:41:20.865746    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/flannel-709000/client.crt: no such file or directory
E0718 21:41:33.994964    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/custom-flannel-709000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-488000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.3: (3m1.411880533s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (181.41s)

TestStartStop/group/old-k8s-version/serial/DeployApp (9.36s)
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-738000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [47a2f938-5085-4a9f-bde9-466bdd3e8ab9] Pending
helpers_test.go:344: "busybox" [47a2f938-5085-4a9f-bde9-466bdd3e8ab9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [47a2f938-5085-4a9f-bde9-466bdd3e8ab9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.003939869s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-738000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.36s)

TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.82s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-738000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-738000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.82s)

TestStartStop/group/old-k8s-version/serial/Stop (8.46s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-738000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-738000 --alsologtostderr -v=3: (8.455436169s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (8.46s)

TestStartStop/group/embed-certs/serial/DeployApp (8.22s)
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-488000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [94659ceb-746d-4595-bdfc-7868713e5b97] Pending
helpers_test.go:344: "busybox" [94659ceb-746d-4595-bdfc-7868713e5b97] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [94659ceb-746d-4595-bdfc-7868713e5b97] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 8.003576358s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-488000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (8.22s)

TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-738000 -n old-k8s-version-738000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-738000 -n old-k8s-version-738000: exit status 7 (67.05096ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-738000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/old-k8s-version/serial/SecondStart (404.1s)
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-738000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0
E0718 21:42:01.677362    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/bridge-709000/client.crt: no such file or directory
E0718 21:42:01.682691    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/bridge-709000/client.crt: no such file or directory
E0718 21:42:01.692749    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/bridge-709000/client.crt: no such file or directory
E0718 21:42:01.713509    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/bridge-709000/client.crt: no such file or directory
E0718 21:42:01.754459    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/bridge-709000/client.crt: no such file or directory
E0718 21:42:01.835282    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/bridge-709000/client.crt: no such file or directory
E0718 21:42:01.995629    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/bridge-709000/client.crt: no such file or directory
E0718 21:42:02.316293    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/bridge-709000/client.crt: no such file or directory
E0718 21:42:02.828203    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kindnet-709000/client.crt: no such file or directory
E0718 21:42:02.957973    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/bridge-709000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-738000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0: (6m43.931319778s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-738000 -n old-k8s-version-738000
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (404.10s)

TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.76s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-488000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-488000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (0.76s)

TestStartStop/group/embed-certs/serial/Stop (8.41s)
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-488000 --alsologtostderr -v=3
E0718 21:42:04.238240    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/bridge-709000/client.crt: no such file or directory
E0718 21:42:06.799721    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/bridge-709000/client.crt: no such file or directory
E0718 21:42:11.920907    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/bridge-709000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-488000 --alsologtostderr -v=3: (8.406902189s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (8.41s)

TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-488000 -n embed-certs-488000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-488000 -n embed-certs-488000: exit status 7 (66.623152ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-488000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/embed-certs/serial/SecondStart (428.3s)
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-488000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.3
E0718 21:42:22.162672    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/bridge-709000/client.crt: no such file or directory
E0718 21:42:33.069920    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/calico-709000/client.crt: no such file or directory
E0718 21:42:36.962182    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
E0718 21:42:42.644790    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/bridge-709000/client.crt: no such file or directory
E0718 21:42:42.788476    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/flannel-709000/client.crt: no such file or directory
E0718 21:43:00.763488    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/calico-709000/client.crt: no such file or directory
E0718 21:43:23.160211    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/enable-default-cni-709000/client.crt: no such file or directory
E0718 21:43:23.166670    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/enable-default-cni-709000/client.crt: no such file or directory
E0718 21:43:23.178185    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/enable-default-cni-709000/client.crt: no such file or directory
E0718 21:43:23.198277    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/enable-default-cni-709000/client.crt: no such file or directory
E0718 21:43:23.240413    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/enable-default-cni-709000/client.crt: no such file or directory
E0718 21:43:23.320890    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/enable-default-cni-709000/client.crt: no such file or directory
E0718 21:43:23.482007    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/enable-default-cni-709000/client.crt: no such file or directory
E0718 21:43:23.607528    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/bridge-709000/client.crt: no such file or directory
E0718 21:43:23.802715    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/enable-default-cni-709000/client.crt: no such file or directory
E0718 21:43:24.052583    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kubenet-709000/client.crt: no such file or directory
E0718 21:43:24.059022    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kubenet-709000/client.crt: no such file or directory
E0718 21:43:24.069339    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kubenet-709000/client.crt: no such file or directory
E0718 21:43:24.089584    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kubenet-709000/client.crt: no such file or directory
E0718 21:43:24.130785    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kubenet-709000/client.crt: no such file or directory
E0718 21:43:24.211840    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kubenet-709000/client.crt: no such file or directory
E0718 21:43:24.372743    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kubenet-709000/client.crt: no such file or directory
E0718 21:43:24.443498    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/enable-default-cni-709000/client.crt: no such file or directory
E0718 21:43:24.694972    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kubenet-709000/client.crt: no such file or directory
E0718 21:43:25.335423    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kubenet-709000/client.crt: no such file or directory
E0718 21:43:25.724585    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/enable-default-cni-709000/client.crt: no such file or directory
E0718 21:43:26.615643    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kubenet-709000/client.crt: no such file or directory
E0718 21:43:28.286977    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/enable-default-cni-709000/client.crt: no such file or directory
E0718 21:43:29.177256    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kubenet-709000/client.crt: no such file or directory
E0718 21:43:33.408823    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/enable-default-cni-709000/client.crt: no such file or directory
E0718 21:43:34.299035    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kubenet-709000/client.crt: no such file or directory
E0718 21:43:43.651480    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/enable-default-cni-709000/client.crt: no such file or directory
E0718 21:43:44.539638    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kubenet-709000/client.crt: no such file or directory
E0718 21:43:56.333392    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/auto-709000/client.crt: no such file or directory
E0718 21:44:04.133640    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/enable-default-cni-709000/client.crt: no such file or directory
E0718 21:44:05.020592    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kubenet-709000/client.crt: no such file or directory
E0718 21:44:06.295766    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0718 21:44:18.989662    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kindnet-709000/client.crt: no such file or directory
E0718 21:44:45.095925    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/enable-default-cni-709000/client.crt: no such file or directory
E0718 21:44:45.530555    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/bridge-709000/client.crt: no such file or directory
E0718 21:44:45.983076    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kubenet-709000/client.crt: no such file or directory
E0718 21:44:46.673828    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kindnet-709000/client.crt: no such file or directory
E0718 21:44:58.940225    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/flannel-709000/client.crt: no such file or directory
E0718 21:45:10.601748    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
E0718 21:45:26.634286    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/flannel-709000/client.crt: no such file or directory
E0718 21:46:06.310432    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/custom-flannel-709000/client.crt: no such file or directory
E0718 21:46:07.019119    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/enable-default-cni-709000/client.crt: no such file or directory
E0718 21:46:07.905751    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kubenet-709000/client.crt: no such file or directory
E0718 21:46:13.906109    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
E0718 21:47:01.685699    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/bridge-709000/client.crt: no such file or directory
E0718 21:47:29.375759    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/bridge-709000/client.crt: no such file or directory
E0718 21:47:33.079835    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/calico-709000/client.crt: no such file or directory
E0718 21:48:23.168009    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/enable-default-cni-709000/client.crt: no such file or directory
E0718 21:48:24.062201    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kubenet-709000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-488000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.3: (7m8.104745381s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-488000 -n embed-certs-488000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (428.30s)

TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-xhbk2" [d9b7fb00-3f17-48ac-84bd-536cd3e1c441] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005079815s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-xhbk2" [d9b7fb00-3f17-48ac-84bd-536cd3e1c441] Running
E0718 21:48:50.865899    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/enable-default-cni-709000/client.crt: no such file or directory
E0718 21:48:51.750893    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kubenet-709000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00399744s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-738000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.16s)
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-738000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/old-k8s-version/serial/Pause (1.86s)
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-738000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-738000 -n old-k8s-version-738000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-738000 -n old-k8s-version-738000: exit status 2 (163.424929ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-738000 -n old-k8s-version-738000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-738000 -n old-k8s-version-738000: exit status 2 (162.41068ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-738000 --alsologtostderr -v=1
E0718 21:48:56.342361    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/auto-709000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-738000 -n old-k8s-version-738000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-738000 -n old-k8s-version-738000
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (1.86s)

TestStartStop/group/no-preload/serial/FirstStart (59.47s)
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-855000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0
E0718 21:49:06.303759    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0718 21:49:18.996558    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kindnet-709000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-855000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0: (59.466628125s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (59.47s)

TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-g89ld" [bfe18045-94c3-4538-93c0-e80672bcca2a] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002205806s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (6.00s)
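The "waiting 9m0s for pods matching …" lines above come from a poll-until-Running loop in the test harness. A minimal sketch of that pattern in plain shell, where `get_pod_phase` is a hypothetical stub standing in for the real pod-status query (the actual harness watches the Kubernetes API; all names here are illustrative):

```shell
#!/bin/sh
# Poll until the (stubbed) pod reports Running, or give up after 10 tries.
tries=0
get_pod_phase() {
  # Stub: pretend the pod needs three polls before it is Running.
  tries=$((tries + 1))
  if [ "$tries" -ge 3 ]; then phase=Running; else phase=Pending; fi
}

i=0
while [ "$i" -lt 10 ]; do
  get_pod_phase
  if [ "$phase" = "Running" ]; then
    break
  fi
  i=$((i + 1))
  # A real loop would sleep between polls; omitted so the sketch runs fast.
done
echo "pod phase after $tries polls: $phase"
# → pod phase after 3 polls: Running
```

The real wait also bounds the whole loop by a deadline (9m0s above) rather than a fixed try count.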

TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-g89ld" [bfe18045-94c3-4538-93c0-e80672bcca2a] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004022366s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-488000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-488000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/embed-certs/serial/Pause (1.99s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-488000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-488000 -n embed-certs-488000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-488000 -n embed-certs-488000: exit status 2 (188.922001ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-488000 -n embed-certs-488000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-488000 -n embed-certs-488000: exit status 2 (169.36096ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-488000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-488000 -n embed-certs-488000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-488000 -n embed-certs-488000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (1.99s)
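The harness explicitly treats some non-zero `status` exits as informational ("exit status 2 (may be ok)" above; exit status 7 appears later for a stopped host). A sketch of how a wrapper could tolerate those codes; `status_ok` and `fake_status` are hypothetical names, and the stub merely stands in for the real `out/minikube-darwin-amd64 status` invocation:

```shell
#!/bin/sh
# Treat status-style exit codes 2 and 7 as informational, matching the
# "(may be ok)" notes in the log; anything else stays an error.
status_ok() {
  "$@"
  rc=$?
  case "$rc" in
    0|2|7) return 0 ;;
    *)     return "$rc" ;;
  esac
}

# Stub for `minikube status --format={{.APIServer}}` on a paused cluster:
# it prints the component state and exits 2, as seen above.
fake_status() { echo "Paused"; return 2; }

if status_ok fake_status; then
  echo "status tolerated"
fi
```

This is only a sketch of the tolerance pattern the log demonstrates; the real test asserts the exact per-component state ("Paused" vs "Stopped") as well.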

TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.59s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-230000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.3
E0718 21:49:58.949011    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/flannel-709000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-230000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.3: (54.590031934s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (54.59s)

TestStartStop/group/no-preload/serial/DeployApp (8.22s)

=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-855000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [dc4578b1-d3e5-45ee-9ce7-d17a7bb41585] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [dc4578b1-d3e5-45ee-9ce7-d17a7bb41585] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 8.00342213s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-855000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (8.22s)

TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.75s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-855000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
E0718 21:50:10.611557    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-855000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.75s)

TestStartStop/group/no-preload/serial/Stop (8.54s)

=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-855000 --alsologtostderr -v=3
E0718 21:50:19.398330    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/auto-709000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-855000 --alsologtostderr -v=3: (8.54332259s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (8.54s)

TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-855000 -n no-preload-855000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-855000 -n no-preload-855000: exit status 7 (65.509682ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-855000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.31s)

TestStartStop/group/no-preload/serial/SecondStart (315.64s)

=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-855000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-855000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0: (5m15.475410618s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-855000 -n no-preload-855000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (315.64s)

TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.22s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-230000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [2d7ba746-e812-473f-8b53-a029b45b1fa9] Pending
helpers_test.go:344: "busybox" [2d7ba746-e812-473f-8b53-a029b45b1fa9] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [2d7ba746-e812-473f-8b53-a029b45b1fa9] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.003297647s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-230000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.22s)
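Each DeployApp step above finishes by reading the busybox pod's open-file limit via `/bin/sh -c "ulimit -n"`. Since `ulimit -n` is a standard shell builtin, the same probe can be run against the local shell to see the kind of value the test captures (this reports the local limit, not the pod's):

```shell
#!/bin/sh
# Same probe the test execs inside the pod, run locally: report the
# soft limit on open file descriptors for this shell.
limit=$(ulimit -n)
echo "open files soft limit: $limit"
```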

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.81s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-230000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-230000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.81s)

TestStartStop/group/default-k8s-diff-port/serial/Stop (8.41s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-230000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-230000 --alsologtostderr -v=3: (8.412296669s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (8.41s)

TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-230000 -n default-k8s-diff-port-230000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-230000 -n default-k8s-diff-port-230000: exit status 7 (65.867199ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-230000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.31s)

TestStartStop/group/default-k8s-diff-port/serial/SecondStart (408.26s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-230000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.3
E0718 21:51:06.319610    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/custom-flannel-709000/client.crt: no such file or directory
E0718 21:51:13.916582    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
E0718 21:51:33.672623    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
E0718 21:51:40.802915    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/old-k8s-version-738000/client.crt: no such file or directory
E0718 21:51:40.808069    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/old-k8s-version-738000/client.crt: no such file or directory
E0718 21:51:40.818524    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/old-k8s-version-738000/client.crt: no such file or directory
E0718 21:51:40.839081    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/old-k8s-version-738000/client.crt: no such file or directory
E0718 21:51:40.879208    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/old-k8s-version-738000/client.crt: no such file or directory
E0718 21:51:40.961384    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/old-k8s-version-738000/client.crt: no such file or directory
E0718 21:51:41.123639    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/old-k8s-version-738000/client.crt: no such file or directory
E0718 21:51:41.443879    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/old-k8s-version-738000/client.crt: no such file or directory
E0718 21:51:42.086223    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/old-k8s-version-738000/client.crt: no such file or directory
E0718 21:51:43.366587    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/old-k8s-version-738000/client.crt: no such file or directory
E0718 21:51:45.927650    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/old-k8s-version-738000/client.crt: no such file or directory
E0718 21:51:51.048062    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/old-k8s-version-738000/client.crt: no such file or directory
E0718 21:52:01.288726    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/old-k8s-version-738000/client.crt: no such file or directory
E0718 21:52:01.695366    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/bridge-709000/client.crt: no such file or directory
E0718 21:52:21.771634    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/old-k8s-version-738000/client.crt: no such file or directory
E0718 21:52:29.376873    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/custom-flannel-709000/client.crt: no such file or directory
E0718 21:52:33.088517    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/calico-709000/client.crt: no such file or directory
E0718 21:53:02.734824    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/old-k8s-version-738000/client.crt: no such file or directory
E0718 21:53:23.177839    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/enable-default-cni-709000/client.crt: no such file or directory
E0718 21:53:24.070834    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kubenet-709000/client.crt: no such file or directory
E0718 21:53:56.145089    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/calico-709000/client.crt: no such file or directory
E0718 21:53:56.352527    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/auto-709000/client.crt: no such file or directory
E0718 21:54:06.311863    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
E0718 21:54:19.007696    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kindnet-709000/client.crt: no such file or directory
E0718 21:54:24.657611    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/old-k8s-version-738000/client.crt: no such file or directory
E0718 21:54:58.957415    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/flannel-709000/client.crt: no such file or directory
E0718 21:55:10.609429    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/functional-345000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-230000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.3: (6m48.092404387s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-230000 -n default-k8s-diff-port-230000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (408.26s)

TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-dddxs" [689ab8ad-40f4-45a4-935b-b534ca86378e] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.005516712s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.01s)

TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-dddxs" [689ab8ad-40f4-45a4-935b-b534ca86378e] Running
E0718 21:55:42.027218    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/kindnet-709000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.004671141s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-855000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-855000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/no-preload/serial/Pause (1.93s)

=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-855000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-855000 -n no-preload-855000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-855000 -n no-preload-855000: exit status 2 (162.055227ms)

-- stdout --
	Paused

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-855000 -n no-preload-855000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-855000 -n no-preload-855000: exit status 2 (163.743233ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-855000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-855000 -n no-preload-855000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-855000 -n no-preload-855000
--- PASS: TestStartStop/group/no-preload/serial/Pause (1.93s)

TestStartStop/group/newest-cni/serial/FirstStart (41.48s)

=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-330000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0
E0718 21:56:06.297596    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/custom-flannel-709000/client.crt: no such file or directory
E0718 21:56:13.894068    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
E0718 21:56:21.982867    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/flannel-709000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-330000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0: (41.48475205s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (41.48s)

TestStartStop/group/newest-cni/serial/DeployApp (0s)

=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.73s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-330000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.73s)

TestStartStop/group/newest-cni/serial/Stop (8.44s)

=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-330000 --alsologtostderr -v=3
E0718 21:56:40.782026    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/old-k8s-version-738000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-330000 --alsologtostderr -v=3: (8.439683597s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.44s)

TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)

=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-330000 -n newest-cni-330000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-330000 -n newest-cni-330000: exit status 7 (68.14681ms)

-- stdout --
	Stopped

-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-330000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.32s)

TestStartStop/group/newest-cni/serial/SecondStart (29.32s)

=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-330000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0
E0718 21:57:01.672070    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/bridge-709000/client.crt: no such file or directory
E0718 21:57:08.471581    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/old-k8s-version-738000/client.crt: no such file or directory
E0718 21:57:09.339224    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/skaffold-456000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-330000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0: (29.106290555s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-330000 -n newest-cni-330000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (29.32s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-330000 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/newest-cni/serial/Pause (1.79s)

=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-330000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-330000 -n newest-cni-330000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-330000 -n newest-cni-330000: exit status 2 (165.750879ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-330000 -n newest-cni-330000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-330000 -n newest-cni-330000: exit status 2 (163.818217ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-330000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-330000 -n newest-cni-330000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-330000 -n newest-cni-330000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (1.79s)

TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-tgjt8" [7ce1c41d-7d83-4c2c-ab9d-2dcd3ea1ca1b] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.00396588s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-tgjt8" [7ce1c41d-7d83-4c2c-ab9d-2dcd3ea1ca1b] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00315434s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-230000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.16s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-diff-port-230000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.16s)

TestStartStop/group/default-k8s-diff-port/serial/Pause (2.03s)

=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-230000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-230000 -n default-k8s-diff-port-230000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-230000 -n default-k8s-diff-port-230000: exit status 2 (167.086736ms)

-- stdout --
	Paused
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-230000 -n default-k8s-diff-port-230000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-230000 -n default-k8s-diff-port-230000: exit status 2 (160.445466ms)

-- stdout --
	Stopped
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-230000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-230000 -n default-k8s-diff-port-230000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-230000 -n default-k8s-diff-port-230000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.03s)


Test skip (23/345)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

TestDownloadOnlyKic (0s)

=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

TestAddons/parallel/Olm (0s)

=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

TestDockerEnvContainerd (0s)

=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

TestKVMDriverInstallOrUpdate (0s)

=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

TestFunctional/parallel/PodmanEnv (0s)

=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

TestFunctional/parallel/MountCmd/specific-port (14.78s)

=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-345000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2547897166/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-345000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (140.343104ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-345000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (132.355943ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-345000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (124.633651ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-345000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (124.973434ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-345000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (125.016099ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-345000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (131.217287ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
E0718 20:41:13.770166    1948 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19302-1411/.minikube/profiles/addons-719000/client.crt: no such file or directory
2024/07/18 20:41:15 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-345000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (168.259289ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-345000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (124.433895ms)

** stderr ** 
	ssh: Process exited with status 1
** /stderr **
functional_test_mount_test.go:251: skipping: mount did not appear, likely because macOS requires prompt to allow non-code signed binaries to listen on non-localhost port
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-345000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-345000 ssh "sudo umount -f /mount-9p": exit status 1 (127.216511ms)

-- stdout --
	umount: /mount-9p: not mounted.
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-345000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-345000 /var/folders/0y/_8hvl7v13q38_kkh25vpxkz00000gp/T/TestFunctionalparallelMountCmdspecific-port2547897166/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- SKIP: TestFunctional/parallel/MountCmd/specific-port (14.78s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopWindows (0s)

=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

TestNetworkPlugins/group/cilium (5.68s)

=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-709000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-709000

>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-709000

>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-709000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-709000

>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-709000

>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-709000

>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-709000

>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-709000

>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-709000

>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-709000

>>> host: /etc/nsswitch.conf:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

>>> host: /etc/hosts:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

>>> host: /etc/resolv.conf:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-709000

>>> host: crictl pods:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

>>> host: crictl containers:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

>>> k8s: describe netcat deployment:
error: context "cilium-709000" does not exist

>>> k8s: describe netcat pod(s):
error: context "cilium-709000" does not exist

>>> k8s: netcat logs:
error: context "cilium-709000" does not exist

>>> k8s: describe coredns deployment:
error: context "cilium-709000" does not exist

>>> k8s: describe coredns pods:
error: context "cilium-709000" does not exist

>>> k8s: coredns logs:
error: context "cilium-709000" does not exist

>>> k8s: describe api server pod(s):
error: context "cilium-709000" does not exist

>>> k8s: api server logs:
error: context "cilium-709000" does not exist

>>> host: /etc/cni:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

>>> host: ip a s:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

>>> host: ip r s:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

>>> host: iptables-save:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-709000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-709000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-709000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-709000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-709000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-709000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-709000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-709000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-709000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-709000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-709000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-709000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-709000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-709000"

                                                
                                                
----------------------- debugLogs end: cilium-709000 [took: 5.462394907s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-709000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-709000
--- SKIP: TestNetworkPlugins/group/cilium (5.68s)

TestStartStop/group/disable-driver-mounts (0.22s)

=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-793000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-793000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.22s)