Test Report: Hyper-V_Windows 20598

63c1754226199ce281e4ac8e931674d5ef457043:2025-04-07:39038

Test fail (13/209)

TestCertOptions (447.38s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-options-647200 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p cert-options-647200 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv: exit status 90 (6m12.0126208s)

-- stdout --
	* [cert-options-647200] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "cert-options-647200" primary control-plane node in "cert-options-647200" cluster
	* Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 07 15:15:22 cert-options-647200 systemd[1]: Starting Docker Application Container Engine...
	Apr 07 15:15:22 cert-options-647200 dockerd[652]: time="2025-04-07T15:15:22.985227236Z" level=info msg="Starting up"
	Apr 07 15:15:22 cert-options-647200 dockerd[652]: time="2025-04-07T15:15:22.986392102Z" level=info msg="containerd not running, starting managed containerd"
	Apr 07 15:15:22 cert-options-647200 dockerd[652]: time="2025-04-07T15:15:22.987301253Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=658
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.020161444Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.046740453Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.046787256Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.046845759Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.046880761Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.046954065Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.047041369Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.047217779Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.047315484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.047337385Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.047348786Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.047435790Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.047896815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.051174089Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.051307496Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.051473604Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.052231445Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.054036940Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.055193102Z" level=info msg="metadata content store policy set" policy=shared
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.085636917Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.085784724Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.085953333Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.085996136Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.086015837Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.086143843Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.086709573Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.086993588Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087095194Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087118195Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087136996Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087153197Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087173198Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087190399Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087208600Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087223601Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087238101Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087252802Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087295604Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087312205Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087326706Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087341407Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087371709Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087386009Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087398610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087411911Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087442012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087467514Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087487815Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087505616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087519116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087535017Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087556718Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087570419Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087582520Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087667024Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087707626Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087722827Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087736828Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087748129Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087762529Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.087773730Z" level=info msg="NRI interface is disabled by configuration."
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.088198152Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.088336560Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.088407063Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 07 15:15:23 cert-options-647200 dockerd[658]: time="2025-04-07T15:15:23.088434865Z" level=info msg="containerd successfully booted in 0.069604s"
	Apr 07 15:15:24 cert-options-647200 dockerd[652]: time="2025-04-07T15:15:24.054344821Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 07 15:15:24 cert-options-647200 dockerd[652]: time="2025-04-07T15:15:24.081486270Z" level=info msg="Loading containers: start."
	Apr 07 15:15:24 cert-options-647200 dockerd[652]: time="2025-04-07T15:15:24.231460828Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 07 15:15:24 cert-options-647200 dockerd[652]: time="2025-04-07T15:15:24.455311788Z" level=info msg="Loading containers: done."
	Apr 07 15:15:24 cert-options-647200 dockerd[652]: time="2025-04-07T15:15:24.481351540Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Apr 07 15:15:24 cert-options-647200 dockerd[652]: time="2025-04-07T15:15:24.481392042Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Apr 07 15:15:24 cert-options-647200 dockerd[652]: time="2025-04-07T15:15:24.481414443Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 07 15:15:24 cert-options-647200 dockerd[652]: time="2025-04-07T15:15:24.481665956Z" level=info msg="Daemon has completed initialization"
	Apr 07 15:15:24 cert-options-647200 dockerd[652]: time="2025-04-07T15:15:24.577658339Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 07 15:15:24 cert-options-647200 dockerd[652]: time="2025-04-07T15:15:24.577776245Z" level=info msg="API listen on [::]:2376"
	Apr 07 15:15:24 cert-options-647200 systemd[1]: Started Docker Application Container Engine.
	Apr 07 15:15:55 cert-options-647200 dockerd[652]: time="2025-04-07T15:15:55.378023971Z" level=info msg="Processing signal 'terminated'"
	Apr 07 15:15:55 cert-options-647200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 07 15:15:55 cert-options-647200 dockerd[652]: time="2025-04-07T15:15:55.380434777Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 07 15:15:55 cert-options-647200 dockerd[652]: time="2025-04-07T15:15:55.380649477Z" level=info msg="Daemon shutdown complete"
	Apr 07 15:15:55 cert-options-647200 dockerd[652]: time="2025-04-07T15:15:55.380692077Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 07 15:15:55 cert-options-647200 dockerd[652]: time="2025-04-07T15:15:55.380830377Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 07 15:15:56 cert-options-647200 systemd[1]: docker.service: Deactivated successfully.
	Apr 07 15:15:56 cert-options-647200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 07 15:15:56 cert-options-647200 systemd[1]: Starting Docker Application Container Engine...
	Apr 07 15:15:56 cert-options-647200 dockerd[1072]: time="2025-04-07T15:15:56.438381884Z" level=info msg="Starting up"
	Apr 07 15:16:56 cert-options-647200 dockerd[1072]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 07 15:16:56 cert-options-647200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 07 15:16:56 cert-options-647200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 07 15:16:56 cert-options-647200 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p cert-options-647200 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperv" : exit status 90
cert_options_test.go:60: (dbg) Run:  out/minikube-windows-amd64.exe -p cert-options-647200 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p cert-options-647200 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 1 (9.8095811s)

-- stdout --
	Can't open /var/lib/minikube/certs/apiserver.crt for reading, No such file or directory
	140344538615872:error:02001002:system library:fopen:No such file or directory:crypto/bio/bss_file.c:69:fopen('/var/lib/minikube/certs/apiserver.crt','r')
	140344538615872:error:2006D080:BIO routines:BIO_new_file:no such file:crypto/bio/bss_file.c:76:
	unable to load certificate

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-windows-amd64.exe -p cert-options-647200 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 1
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-647200 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters:\n\t- cluster:\n\t    certificate-authority: C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt\n\t    extensions:\n\t    - extension:\n\t        last-update: Mon, 07 Apr 2025 15:08:27 UTC\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.35.0\n\t      name: cluster_info\n\t    server: https://172.17.86.101:8443\n\t  name: cert-expiration-287100\n\t- cluster:\n\t    certificate-authority: C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt\n\t    extensions:\n\t    - extension:\n\t        last-update: Mon, 07 Apr 2025 13:00:03 UTC\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.35.0\n\t      name: cluster_info\n\t    server: https://172.17.95.254:8443\n\t  name: ha-573100\n\tcontexts:\n\t- context:\n\t    cluster: cert-expiration-287100\n\t    extensions:\n\t    - extension:\n\t        last-update: Mon, 07 Apr 2025 15:08:27 UTC\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.35.0\n\t      name: context_info\n\t    namespace: default\n\t    user: cert-expiration-287100\n\t  name: cert-expiration-287100\n\t- context:\n\t    cluster: ha-573100\n\t    extensions:\n\t    - extension:\n\t        last-update: Mon, 07 Apr 2025 13:00:03 UTC\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.35.0\n\t      name: context_info\n\t    namespace: default\n\t    user: ha-573100\n\t  name: ha-573100\n\tcurrent-context: \"\"\n\tkind: Config\n\tpreferences: {}\n\tusers:\n\t- name: cert-expiration-287100\n\t  user:\n\t    client-certificate: C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\cert-expiration-287100\\client.crt\n\t    client-key: C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\cert-expiration-287100\\client.key\n\t- name: ha-573100\n\t  user:\n\t    client-certificate: C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-573100\\client.crt\n\t    client-key: C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-573100\\client.key\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p cert-options-647200 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-windows-amd64.exe ssh -p cert-options-647200 -- "sudo cat /etc/kubernetes/admin.conf": exit status 1 (10.187812s)

-- stdout --
	cat: /etc/kubernetes/admin.conf: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-windows-amd64.exe ssh -p cert-options-647200 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 1
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contains the right api port. 
-- stdout --
	cat: /etc/kubernetes/admin.conf: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
cert_options_test.go:109: *** TestCertOptions FAILED at 2025-04-07 15:17:16.678611 +0000 UTC m=+10699.919064101
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-options-647200 -n cert-options-647200
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p cert-options-647200 -n cert-options-647200: exit status 6 (13.5790841s)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0407 15:17:30.156170    9028 status.go:458] kubeconfig endpoint: get endpoint: "cert-options-647200" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "cert-options-647200" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "cert-options-647200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-options-647200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-options-647200: (41.6429604s)
--- FAIL: TestCertOptions (447.38s)

TestErrorSpam/setup (193.76s)

=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -p nospam-276800 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 --driver=hyperv
error_spam_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -p nospam-276800 -n=1 --memory=2250 --wait=false --log_dir=C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 --driver=hyperv: (3m13.7599971s)
error_spam_test.go:96: unexpected stderr: "! Failing to connect to https://registry.k8s.io/ from inside the minikube VM"
error_spam_test.go:96: unexpected stderr: "* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/"
error_spam_test.go:110: minikube stdout:
* [nospam-276800] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
- KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
- MINIKUBE_LOCATION=20598
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
* Using the hyperv driver based on user configuration
* Starting "nospam-276800" primary control-plane node in "nospam-276800" cluster
* Creating hyperv VM (CPUs=2, Memory=2250MB, Disk=20000MB) ...
* Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
- Generating certificates and keys ...
- Booting up control plane ...
- Configuring RBAC rules ...
* Configuring bridge CNI (Container Networking Interface) ...
* Verifying Kubernetes components...
- Using image gcr.io/k8s-minikube/storage-provisioner:v5
* Enabled addons: storage-provisioner, default-storageclass
* Done! kubectl is now configured to use "nospam-276800" cluster and "default" namespace by default
error_spam_test.go:111: minikube stderr:
! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
--- FAIL: TestErrorSpam/setup (193.76s)

TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1526: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 service --namespace=default --https --url hello-node
functional_test.go:1526: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-168700 service --namespace=default --https --url hello-node: exit status 1 (15.0113034s)
functional_test.go:1528: failed to get service url. args "out/minikube-windows-amd64.exe -p functional-168700 service --namespace=default --https --url hello-node" : exit status 1
--- FAIL: TestFunctional/parallel/ServiceCmd/HTTPS (15.01s)

TestFunctional/parallel/ServiceCmd/Format (15.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1557: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 service hello-node --url --format={{.IP}}
functional_test.go:1557: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-168700 service hello-node --url --format={{.IP}}: exit status 1 (15.0179666s)
functional_test.go:1559: failed to get service url with custom format. args "out/minikube-windows-amd64.exe -p functional-168700 service hello-node --url --format={{.IP}}": exit status 1
functional_test.go:1565: "" is not a valid IP
--- FAIL: TestFunctional/parallel/ServiceCmd/Format (15.02s)

TestFunctional/parallel/ServiceCmd/URL (15.02s)

=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1576: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 service hello-node --url
functional_test.go:1576: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-168700 service hello-node --url: exit status 1 (15.0167747s)
functional_test.go:1578: failed to get service url. args: "out/minikube-windows-amd64.exe -p functional-168700 service hello-node --url": exit status 1
functional_test.go:1582: found endpoint for hello-node: 
functional_test.go:1590: expected scheme to be -"http"- got scheme: *""*
--- FAIL: TestFunctional/parallel/ServiceCmd/URL (15.02s)

TestMultiControlPlane/serial/PingHostFromPods (69.15s)

=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-573100 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-573100 -- exec busybox-58667487b6-gtkbk -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-573100 -- exec busybox-58667487b6-gtkbk -- sh -c "ping -c 1 172.17.80.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-573100 -- exec busybox-58667487b6-gtkbk -- sh -c "ping -c 1 172.17.80.1": exit status 1 (10.5119602s)

-- stdout --
	PING 172.17.80.1 (172.17.80.1): 56 data bytes
	
	--- 172.17.80.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.17.80.1) from pod (busybox-58667487b6-gtkbk): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-573100 -- exec busybox-58667487b6-szx9k -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-573100 -- exec busybox-58667487b6-szx9k -- sh -c "ping -c 1 172.17.80.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-573100 -- exec busybox-58667487b6-szx9k -- sh -c "ping -c 1 172.17.80.1": exit status 1 (10.4823663s)

-- stdout --
	PING 172.17.80.1 (172.17.80.1): 56 data bytes
	
	--- 172.17.80.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.17.80.1) from pod (busybox-58667487b6-szx9k): exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-573100 -- exec busybox-58667487b6-tj2cw -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-573100 -- exec busybox-58667487b6-tj2cw -- sh -c "ping -c 1 172.17.80.1"
ha_test.go:218: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p ha-573100 -- exec busybox-58667487b6-tj2cw -- sh -c "ping -c 1 172.17.80.1": exit status 1 (10.4919993s)

-- stdout --
	PING 172.17.80.1 (172.17.80.1): 56 data bytes
	
	--- 172.17.80.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

-- /stdout --
** stderr ** 
	command terminated with exit code 1

** /stderr **
ha_test.go:219: Failed to ping host (172.17.80.1) from pod (busybox-58667487b6-tj2cw): exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-573100 -n ha-573100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-573100 -n ha-573100: (12.3957629s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 logs -n 25: (8.9445999s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	| image   | functional-168700                    | functional-168700 | minikube3\jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
	|         | image ls --format table              |                   |                   |         |                     |                     |
	|         | --alsologtostderr                    |                   |                   |         |                     |                     |
	| image   | functional-168700 image build -t     | functional-168700 | minikube3\jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
	|         | localhost/my-image:functional-168700 |                   |                   |         |                     |                     |
	|         | testdata\build --alsologtostderr     |                   |                   |         |                     |                     |
	| image   | functional-168700 image ls           | functional-168700 | minikube3\jenkins | v1.35.0 | 07 Apr 25 12:52 UTC | 07 Apr 25 12:52 UTC |
	| delete  | -p functional-168700                 | functional-168700 | minikube3\jenkins | v1.35.0 | 07 Apr 25 12:56 UTC | 07 Apr 25 12:56 UTC |
	| start   | -p ha-573100 --wait=true             | ha-573100         | minikube3\jenkins | v1.35.0 | 07 Apr 25 12:56 UTC | 07 Apr 25 13:08 UTC |
	|         | --memory=2200 --ha                   |                   |                   |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |                   |         |                     |                     |
	|         | --driver=hyperv                      |                   |                   |         |                     |                     |
	| kubectl | -p ha-573100 -- apply -f             | ha-573100         | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:08 UTC | 07 Apr 25 13:08 UTC |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |                   |         |                     |                     |
	| kubectl | -p ha-573100 -- rollout status       | ha-573100         | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:08 UTC | 07 Apr 25 13:08 UTC |
	|         | deployment/busybox                   |                   |                   |         |                     |                     |
	| kubectl | -p ha-573100 -- get pods -o          | ha-573100         | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:08 UTC | 07 Apr 25 13:08 UTC |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |                   |         |                     |                     |
	| kubectl | -p ha-573100 -- get pods -o          | ha-573100         | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:08 UTC | 07 Apr 25 13:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-573100 -- exec                 | ha-573100         | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:08 UTC | 07 Apr 25 13:08 UTC |
	|         | busybox-58667487b6-gtkbk --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-573100 -- exec                 | ha-573100         | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:08 UTC | 07 Apr 25 13:08 UTC |
	|         | busybox-58667487b6-szx9k --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-573100 -- exec                 | ha-573100         | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:08 UTC | 07 Apr 25 13:08 UTC |
	|         | busybox-58667487b6-tj2cw --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |                   |         |                     |                     |
	| kubectl | -p ha-573100 -- exec                 | ha-573100         | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:08 UTC | 07 Apr 25 13:08 UTC |
	|         | busybox-58667487b6-gtkbk --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-573100 -- exec                 | ha-573100         | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:08 UTC | 07 Apr 25 13:08 UTC |
	|         | busybox-58667487b6-szx9k --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-573100 -- exec                 | ha-573100         | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:08 UTC | 07 Apr 25 13:08 UTC |
	|         | busybox-58667487b6-tj2cw --          |                   |                   |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |                   |         |                     |                     |
	| kubectl | -p ha-573100 -- exec                 | ha-573100         | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:08 UTC | 07 Apr 25 13:08 UTC |
	|         | busybox-58667487b6-gtkbk -- nslookup |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-573100 -- exec                 | ha-573100         | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:08 UTC | 07 Apr 25 13:08 UTC |
	|         | busybox-58667487b6-szx9k -- nslookup |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-573100 -- exec                 | ha-573100         | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:08 UTC | 07 Apr 25 13:08 UTC |
	|         | busybox-58667487b6-tj2cw -- nslookup |                   |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |                   |         |                     |                     |
	| kubectl | -p ha-573100 -- get pods -o          | ha-573100         | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:08 UTC | 07 Apr 25 13:08 UTC |
	|         | jsonpath='{.items[*].metadata.name}' |                   |                   |         |                     |                     |
	| kubectl | -p ha-573100 -- exec                 | ha-573100         | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:08 UTC | 07 Apr 25 13:08 UTC |
	|         | busybox-58667487b6-gtkbk             |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-573100 -- exec                 | ha-573100         | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:08 UTC |                     |
	|         | busybox-58667487b6-gtkbk -- sh       |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.80.1             |                   |                   |         |                     |                     |
	| kubectl | -p ha-573100 -- exec                 | ha-573100         | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:09 UTC | 07 Apr 25 13:09 UTC |
	|         | busybox-58667487b6-szx9k             |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-573100 -- exec                 | ha-573100         | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:09 UTC |                     |
	|         | busybox-58667487b6-szx9k -- sh       |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.80.1             |                   |                   |         |                     |                     |
	| kubectl | -p ha-573100 -- exec                 | ha-573100         | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:09 UTC | 07 Apr 25 13:09 UTC |
	|         | busybox-58667487b6-tj2cw             |                   |                   |         |                     |                     |
	|         | -- sh -c nslookup                    |                   |                   |         |                     |                     |
	|         | host.minikube.internal | awk         |                   |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |                   |                   |         |                     |                     |
	| kubectl | -p ha-573100 -- exec                 | ha-573100         | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:09 UTC |                     |
	|         | busybox-58667487b6-tj2cw -- sh       |                   |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.80.1             |                   |                   |         |                     |                     |
	|---------|--------------------------------------|-------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:56:56
	Running on machine: minikube3
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:56:56.656239    7088 out.go:345] Setting OutFile to fd 1476 ...
	I0407 12:56:56.735608    7088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:56:56.735608    7088 out.go:358] Setting ErrFile to fd 1632...
	I0407 12:56:56.735608    7088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:56:56.755799    7088 out.go:352] Setting JSON to false
	I0407 12:56:56.758800    7088 start.go:129] hostinfo: {"hostname":"minikube3","uptime":2409,"bootTime":1744028207,"procs":179,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0407 12:56:56.759802    7088 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0407 12:56:56.768668    7088 out.go:177] * [ha-573100] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0407 12:56:56.772086    7088 notify.go:220] Checking for updates...
	I0407 12:56:56.775737    7088 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 12:56:56.778615    7088 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 12:56:56.781592    7088 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0407 12:56:56.784776    7088 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 12:56:56.787572    7088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:56:56.790068    7088 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:57:02.018232    7088 out.go:177] * Using the hyperv driver based on user configuration
	I0407 12:57:02.022340    7088 start.go:297] selected driver: hyperv
	I0407 12:57:02.022340    7088 start.go:901] validating driver "hyperv" against <nil>
	I0407 12:57:02.022340    7088 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 12:57:02.069649    7088 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 12:57:02.071468    7088 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 12:57:02.071617    7088 cni.go:84] Creating CNI manager for ""
	I0407 12:57:02.071691    7088 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0407 12:57:02.071691    7088 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0407 12:57:02.071880    7088 start.go:340] cluster config:
	{Name:ha-573100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-573100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:57:02.072172    7088 iso.go:125] acquiring lock: {Name:mk99bbb6a54210c1995fdf151b41c83b57c3735b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 12:57:02.076951    7088 out.go:177] * Starting "ha-573100" primary control-plane node in "ha-573100" cluster
	I0407 12:57:02.079104    7088 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 12:57:02.079632    7088 preload.go:146] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0407 12:57:02.079632    7088 cache.go:56] Caching tarball of preloaded images
	I0407 12:57:02.079882    7088 preload.go:172] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0407 12:57:02.079882    7088 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0407 12:57:02.080979    7088 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\config.json ...
	I0407 12:57:02.080979    7088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\config.json: {Name:mkee596f205fc528f696d7e985c07299fecd44dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:57:02.082308    7088 start.go:360] acquireMachinesLock for ha-573100: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 12:57:02.082308    7088 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-573100"
	I0407 12:57:02.082308    7088 start.go:93] Provisioning new machine with config: &{Name:ha-573100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-573100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 12:57:02.082308    7088 start.go:125] createHost starting for "" (driver="hyperv")
	I0407 12:57:02.086455    7088 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0407 12:57:02.086455    7088 start.go:159] libmachine.API.Create for "ha-573100" (driver="hyperv")
	I0407 12:57:02.086455    7088 client.go:168] LocalClient.Create starting
	I0407 12:57:02.087506    7088 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0407 12:57:02.087506    7088 main.go:141] libmachine: Decoding PEM data...
	I0407 12:57:02.087506    7088 main.go:141] libmachine: Parsing certificate...
	I0407 12:57:02.087506    7088 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0407 12:57:02.087506    7088 main.go:141] libmachine: Decoding PEM data...
	I0407 12:57:02.087506    7088 main.go:141] libmachine: Parsing certificate...
	I0407 12:57:02.087506    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0407 12:57:04.112859    7088 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0407 12:57:04.113042    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:04.113111    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0407 12:57:05.726263    7088 main.go:141] libmachine: [stdout =====>] : False
	
	I0407 12:57:05.727089    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:05.727089    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0407 12:57:07.154218    7088 main.go:141] libmachine: [stdout =====>] : True
	
	I0407 12:57:07.154523    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:07.154523    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0407 12:57:10.589998    7088 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0407 12:57:10.589998    7088 main.go:141] libmachine: [stderr =====>] : 
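	The Get-VMSwitch query above returns JSON produced by ConvertTo-Json, which the driver parses to pick a switch for the VM (it later logs: Using switch "Default Switch"). A minimal Go sketch of parsing that exact payload follows; the vmSwitch type and its field layout are illustrative, not minikube's actual code.

package main

import (
	"encoding/json"
	"fmt"
)

// vmSwitch mirrors the fields selected by the Get-VMSwitch query in the log.
type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

func main() {
	// The JSON captured for this run, compacted onto one line.
	raw := `[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`
	var switches []vmSwitch
	if err := json.Unmarshal([]byte(raw), &switches); err != nil {
		panic(err)
	}
	for _, s := range switches {
		fmt.Printf("Using switch %q (%s, type %d)\n", s.Name, s.Id, s.SwitchType)
	}
}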
	I0407 12:57:10.592051    7088 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0407 12:57:11.106028    7088 main.go:141] libmachine: Creating SSH key...
	I0407 12:57:11.374353    7088 main.go:141] libmachine: Creating VM...
	I0407 12:57:11.374353    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0407 12:57:14.119317    7088 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0407 12:57:14.119381    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:14.119441    7088 main.go:141] libmachine: Using switch "Default Switch"
	I0407 12:57:14.119502    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0407 12:57:15.846969    7088 main.go:141] libmachine: [stdout =====>] : True
	
	I0407 12:57:15.847147    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:15.847205    7088 main.go:141] libmachine: Creating VHD
	I0407 12:57:15.847205    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0407 12:57:19.573858    7088 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E1DD200D-A45E-4B28-A627-5E3F3FBE7F93
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0407 12:57:19.573858    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:19.574146    7088 main.go:141] libmachine: Writing magic tar header
	I0407 12:57:19.574264    7088 main.go:141] libmachine: Writing SSH key tar header
	I0407 12:57:19.587836    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0407 12:57:22.694128    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 12:57:22.694567    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:22.694567    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\disk.vhd' -SizeBytes 20000MB
	I0407 12:57:25.212818    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 12:57:25.213338    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:25.213412    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-573100 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0407 12:57:28.719478    7088 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-573100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0407 12:57:28.719527    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:28.719527    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-573100 -DynamicMemoryEnabled $false
	I0407 12:57:30.946657    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 12:57:30.946912    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:30.946912    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-573100 -Count 2
	I0407 12:57:33.077863    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 12:57:33.077863    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:33.077863    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-573100 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\boot2docker.iso'
	I0407 12:57:35.632234    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 12:57:35.632234    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:35.632234    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-573100 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\disk.vhd'
	I0407 12:57:38.181883    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 12:57:38.181883    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:38.181883    7088 main.go:141] libmachine: Starting VM...
	I0407 12:57:38.181955    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-573100
	I0407 12:57:41.204491    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 12:57:41.204491    7088 main.go:141] libmachine: [stderr =====>] : 
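	Each [executing ==>] / [stdout =====>] / [stderr =====>] triple in this log is the Hyper-V driver invoking powershell.exe with -NoProfile -NonInteractive and capturing both output streams. Below is a minimal Go sketch of that call shape, assuming a hypothetical helper named runPowerShell; it is an illustration, not minikube's source.

package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// runPowerShell is a hypothetical helper mirroring the "[executing ==>]" lines:
// it runs a single PowerShell command non-interactively and returns the
// captured stdout and stderr separately.
func runPowerShell(command string) (stdout, stderr string, err error) {
	cmd := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", command,
	)
	var out, errBuf bytes.Buffer
	cmd.Stdout = &out
	cmd.Stderr = &errBuf
	err = cmd.Run()
	return out.String(), errBuf.String(), err
}

func main() {
	out, errOut, err := runPowerShell(`Hyper-V\Start-VM ha-573100`)
	fmt.Printf("[stdout =====>] : %s\n", out)
	fmt.Printf("[stderr =====>] : %s\n", errOut)
	if err != nil {
		fmt.Println("command failed:", err)
	}
}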
	I0407 12:57:41.205063    7088 main.go:141] libmachine: Waiting for host to start...
	I0407 12:57:41.205063    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:57:43.431004    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:57:43.431395    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:43.431606    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:57:45.920193    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 12:57:45.920193    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:46.921532    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:57:49.099498    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:57:49.100524    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:49.100524    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:57:51.579432    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 12:57:51.579518    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:52.580131    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:57:54.732864    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:57:54.732864    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:54.733266    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:57:57.277325    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 12:57:57.277325    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:58.278890    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:58:00.458453    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:58:00.458453    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:00.458779    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:58:02.953414    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 12:58:02.953414    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:03.953667    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:58:06.160201    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:58:06.160201    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:06.160427    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:58:08.683628    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:58:08.684366    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:08.684492    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:58:10.784582    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:58:10.785357    7088 main.go:141] libmachine: [stderr =====>] : 
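	From 12:57:41 to 12:58:08 the driver alternates a VM state query with a query for the first adapter's first IP address, retrying until DHCP assigns 172.17.95.223. A self-contained Go sketch of that wait loop follows; the one-second retry interval and three-minute deadline are assumptions for illustration.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForIP polls the first IP address of the VM's first network adapter until
// it is non-empty, mirroring the 12:57:41-12:58:08 wait loop in the log.
// The 1-second retry interval and 3-minute deadline are illustrative assumptions.
func waitForIP(vmName string) (string, error) {
	query := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName)
	deadline := time.Now().Add(3 * time.Minute)
	for time.Now().Before(deadline) {
		out, err := exec.Command(
			`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
			"-NoProfile", "-NonInteractive", query,
		).Output()
		if err != nil {
			return "", err
		}
		// Empty stdout means no address has been handed out yet; try again.
		if ip := strings.TrimSpace(string(out)); ip != "" {
			return ip, nil // 172.17.95.223 in this run
		}
		time.Sleep(1 * time.Second)
	}
	return "", fmt.Errorf("timed out waiting for an IP on %s", vmName)
}

func main() {
	ip, err := waitForIP("ha-573100")
	fmt.Println(ip, err)
}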
	I0407 12:58:10.785408    7088 machine.go:93] provisionDockerMachine start ...
	I0407 12:58:10.785408    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:58:12.887155    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:58:12.887155    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:12.887657    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:58:15.337513    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:58:15.337513    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:15.343727    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 12:58:15.359815    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.95.223 22 <nil> <nil>}
	I0407 12:58:15.359815    7088 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 12:58:15.488436    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0407 12:58:15.488556    7088 buildroot.go:166] provisioning hostname "ha-573100"
	I0407 12:58:15.488683    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:58:17.580405    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:58:17.580946    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:17.580946    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:58:19.992826    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:58:19.992958    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:19.997864    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 12:58:19.998596    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.95.223 22 <nil> <nil>}
	I0407 12:58:19.998596    7088 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-573100 && echo "ha-573100" | sudo tee /etc/hostname
	I0407 12:58:20.161609    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-573100
	
	I0407 12:58:20.161609    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:58:22.240077    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:58:22.240077    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:22.240873    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:58:24.682344    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:58:24.682344    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:24.688821    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 12:58:24.689564    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.95.223 22 <nil> <nil>}
	I0407 12:58:24.689564    7088 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-573100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-573100/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-573100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 12:58:24.843318    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 12:58:24.843318    7088 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0407 12:58:24.843318    7088 buildroot.go:174] setting up certificates
	I0407 12:58:24.843318    7088 provision.go:84] configureAuth start
	I0407 12:58:24.843318    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:58:26.895365    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:58:26.895365    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:26.896258    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:58:29.342609    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:58:29.342870    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:29.342870    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:58:31.410057    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:58:31.410146    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:31.410146    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:58:33.899865    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:58:33.900440    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:33.900519    7088 provision.go:143] copyHostCerts
	I0407 12:58:33.900519    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0407 12:58:33.901257    7088 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0407 12:58:33.901257    7088 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0407 12:58:33.901257    7088 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0407 12:58:33.902518    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0407 12:58:33.903166    7088 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0407 12:58:33.903210    7088 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0407 12:58:33.903601    7088 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0407 12:58:33.906270    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0407 12:58:33.906638    7088 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0407 12:58:33.906681    7088 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0407 12:58:33.907041    7088 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0407 12:58:33.908163    7088 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-573100 san=[127.0.0.1 172.17.95.223 ha-573100 localhost minikube]
	I0407 12:58:34.284036    7088 provision.go:177] copyRemoteCerts
	I0407 12:58:34.296897    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 12:58:34.296897    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:58:36.307926    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:58:36.307981    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:36.308066    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:58:38.820032    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:58:38.820032    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:38.820975    7088 sshutil.go:53] new ssh client: &{IP:172.17.95.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\id_rsa Username:docker}
	I0407 12:58:38.923780    7088 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6268653s)
	I0407 12:58:38.923780    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0407 12:58:38.924346    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 12:58:38.966386    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0407 12:58:38.966386    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0407 12:58:39.010525    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0407 12:58:39.010525    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0407 12:58:39.056348    7088 provision.go:87] duration metric: took 14.2129735s to configureAuth
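	The configureAuth step above generates a server certificate whose SANs are 127.0.0.1, 172.17.95.223, ha-573100, localhost and minikube, signed with the minikube CA key. The standard-library sketch below builds a certificate with the same SAN list; to stay short it self-signs rather than signing with the CA, so it shows the shape of the step rather than minikube's implementation. The 26280h validity is the CertExpiration value from the config dump.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-573100"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
		// SAN list taken from the provision.go line above.
		DNSNames:    []string{"ha-573100", "localhost", "minikube"},
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.95.223")},
		KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	}
	// Self-signed here for brevity; the real step signs with the CA key pair.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}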
	I0407 12:58:39.056348    7088 buildroot.go:189] setting minikube options for container-runtime
	I0407 12:58:39.056979    7088 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 12:58:39.056979    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:58:41.157468    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:58:41.157468    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:41.157569    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:58:43.588373    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:58:43.588970    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:43.594044    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 12:58:43.595149    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.95.223 22 <nil> <nil>}
	I0407 12:58:43.595149    7088 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0407 12:58:43.718879    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0407 12:58:43.718879    7088 buildroot.go:70] root file system type: tmpfs
	I0407 12:58:43.719860    7088 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0407 12:58:43.719860    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:58:45.744799    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:58:45.745800    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:45.745800    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:58:48.151778    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:58:48.151778    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:48.157643    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 12:58:48.158263    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.95.223 22 <nil> <nil>}
	I0407 12:58:48.158852    7088 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0407 12:58:48.322038    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0407 12:58:48.322681    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:58:50.387974    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:58:50.388228    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:50.388228    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:58:52.832791    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:58:52.833817    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:52.838799    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 12:58:52.839060    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.95.223 22 <nil> <nil>}
	I0407 12:58:52.839060    7088 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0407 12:58:55.024916    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0407 12:58:55.024916    7088 machine.go:96] duration metric: took 44.23933s to provisionDockerMachine
	I0407 12:58:55.024916    7088 client.go:171] duration metric: took 1m52.9380074s to LocalClient.Create
	I0407 12:58:55.024916    7088 start.go:167] duration metric: took 1m52.9380074s to libmachine.API.Create "ha-573100"
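	The docker.service unit installed above appears to be rendered from the cluster config: the driver shows up as --label provider=hyperv and the service CIDR (10.96.0.0/12) as --insecure-registry, alongside the TLS material copied to /etc/docker earlier. Below is a small Go sketch of rendering just the ExecStart stanza from those two values with text/template; the template text and struct fields are illustrative, not minikube's source.

package main

import (
	"os"
	"text/template"
)

// unitTemplate renders only the ExecStart lines of the docker unit shown above;
// the blank ExecStart= first clears any ExecStart inherited from a base unit.
const unitTemplate = `ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider={{.Provider}} --insecure-registry {{.InsecureRegistry}}
`

type dockerOpts struct {
	Provider         string // driver name, e.g. "hyperv"
	InsecureRegistry string // service CIDR, e.g. "10.96.0.0/12"
}

func main() {
	t := template.Must(template.New("exec").Parse(unitTemplate))
	// Values taken from this run's config dump.
	_ = t.Execute(os.Stdout, dockerOpts{Provider: "hyperv", InsecureRegistry: "10.96.0.0/12"})
}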
	I0407 12:58:55.024916    7088 start.go:293] postStartSetup for "ha-573100" (driver="hyperv")
	I0407 12:58:55.024916    7088 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 12:58:55.037839    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 12:58:55.038353    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:58:57.111631    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:58:57.112451    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:57.112621    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:58:59.543898    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:58:59.544967    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:59.545264    7088 sshutil.go:53] new ssh client: &{IP:172.17.95.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\id_rsa Username:docker}
	I0407 12:58:59.660093    7088 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6222349s)
	I0407 12:58:59.671846    7088 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 12:58:59.681362    7088 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 12:58:59.681503    7088 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0407 12:58:59.682368    7088 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0407 12:58:59.683827    7088 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> 77282.pem in /etc/ssl/certs
	I0407 12:58:59.683919    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> /etc/ssl/certs/77282.pem
	I0407 12:58:59.695801    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 12:58:59.712595    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem --> /etc/ssl/certs/77282.pem (1708 bytes)
	I0407 12:58:59.756674    7088 start.go:296] duration metric: took 4.7317392s for postStartSetup
	I0407 12:58:59.759800    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:59:01.805263    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:59:01.805263    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:59:01.805263    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:59:04.255011    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:59:04.255196    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:59:04.255196    7088 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\config.json ...
	I0407 12:59:04.258180    7088 start.go:128] duration metric: took 2m2.1753813s to createHost
	I0407 12:59:04.258254    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:59:06.315549    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:59:06.315549    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:59:06.316582    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:59:08.867920    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:59:08.867920    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:59:08.873756    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 12:59:08.874511    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.95.223 22 <nil> <nil>}
	I0407 12:59:08.874511    7088 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 12:59:09.016590    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744030749.029228276
	
	I0407 12:59:09.016665    7088 fix.go:216] guest clock: 1744030749.029228276
	I0407 12:59:09.016665    7088 fix.go:229] Guest: 2025-04-07 12:59:09.029228276 +0000 UTC Remote: 2025-04-07 12:59:04.258254 +0000 UTC m=+127.699871101 (delta=4.770974276s)
	I0407 12:59:09.016803    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:59:11.130936    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:59:11.131960    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:59:11.131960    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:59:13.631084    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:59:13.631084    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:59:13.638672    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 12:59:13.639427    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.95.223 22 <nil> <nil>}
	I0407 12:59:13.639427    7088 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744030749
	I0407 12:59:13.793259    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  7 12:59:09 UTC 2025
	
	I0407 12:59:13.793316    7088 fix.go:236] clock set: Mon Apr  7 12:59:09 UTC 2025
	 (err=<nil>)
	I0407 12:59:13.793373    7088 start.go:83] releasing machines lock for "ha-573100", held for 2m11.7105353s
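	The guest-clock check above parses the VM's date +%s.%N output, compares it with the host-side timestamp, reports a delta of about 4.77s, and then resets the guest clock with sudo date -s. A standard-library sketch of that comparison using the exact values from this run follows; guestClock is a hypothetical helper.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// guestClock parses the "date +%s.%N" output captured from the VM
// (e.g. "1744030749.029228276") into a time.Time. %N always prints nine
// digits, so the fractional part can be used directly as nanoseconds.
func guestClock(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, _ := guestClock("1744030749.029228276")
	// Host-side reference time from the fix.go line above (2025-04-07 12:59:04.258254 UTC).
	host := time.Date(2025, 4, 7, 12, 59, 4, 258254000, time.UTC)
	fmt.Println("delta:", guest.Sub(host)) // roughly 4.77s, matching the log
}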
	I0407 12:59:13.793684    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:59:15.913439    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:59:15.913898    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:59:15.913898    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:59:18.358112    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:59:18.358112    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:59:18.362986    7088 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0407 12:59:18.362986    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:59:18.372488    7088 ssh_runner.go:195] Run: cat /version.json
	I0407 12:59:18.372488    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:59:20.548439    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:59:20.548886    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:59:20.548886    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:59:20.548886    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:59:20.548886    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:59:20.549086    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:59:23.189509    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:59:23.189509    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:59:23.189751    7088 sshutil.go:53] new ssh client: &{IP:172.17.95.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\id_rsa Username:docker}
	I0407 12:59:23.209875    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:59:23.209875    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:59:23.209875    7088 sshutil.go:53] new ssh client: &{IP:172.17.95.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\id_rsa Username:docker}
	I0407 12:59:23.289922    7088 ssh_runner.go:235] Completed: cat /version.json: (4.9174134s)
	I0407 12:59:23.300131    7088 ssh_runner.go:195] Run: systemctl --version
	I0407 12:59:23.305113    7088 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9421064s)
	W0407 12:59:23.305113    7088 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0407 12:59:23.321796    7088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 12:59:23.330868    7088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 12:59:23.340640    7088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 12:59:23.366008    7088 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 12:59:23.366008    7088 start.go:495] detecting cgroup driver to use...
	I0407 12:59:23.366094    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 12:59:23.413088    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0407 12:59:23.442041    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0407 12:59:23.460440    7088 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	W0407 12:59:23.463963    7088 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0407 12:59:23.463963    7088 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0407 12:59:23.472444    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0407 12:59:23.506793    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 12:59:23.536110    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0407 12:59:23.564748    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 12:59:23.594692    7088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 12:59:23.627833    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0407 12:59:23.656081    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0407 12:59:23.685167    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0407 12:59:23.713037    7088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 12:59:23.729645    7088 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 12:59:23.740804    7088 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 12:59:23.769828    7088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 12:59:23.794874    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:59:23.966703    7088 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0407 12:59:23.993450    7088 start.go:495] detecting cgroup driver to use...
	I0407 12:59:24.004708    7088 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0407 12:59:24.041488    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 12:59:24.077912    7088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 12:59:24.115551    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 12:59:24.149725    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 12:59:24.182069    7088 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0407 12:59:24.241507    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 12:59:24.265908    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 12:59:24.312108    7088 ssh_runner.go:195] Run: which cri-dockerd
	I0407 12:59:24.328142    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0407 12:59:24.347177    7088 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0407 12:59:24.387877    7088 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0407 12:59:24.576285    7088 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0407 12:59:24.757301    7088 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0407 12:59:24.757524    7088 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0407 12:59:24.800260    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:59:24.981084    7088 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0407 12:59:27.550187    7088 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5689591s)
	I0407 12:59:27.561136    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0407 12:59:27.594889    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 12:59:27.629182    7088 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0407 12:59:27.812832    7088 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0407 12:59:27.990048    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:59:28.172135    7088 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0407 12:59:28.213110    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 12:59:28.246922    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:59:28.434067    7088 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0407 12:59:28.528385    7088 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0407 12:59:28.538278    7088 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0407 12:59:28.547059    7088 start.go:563] Will wait 60s for crictl version
	I0407 12:59:28.557499    7088 ssh_runner.go:195] Run: which crictl
	I0407 12:59:28.572978    7088 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 12:59:28.621588    7088 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0407 12:59:28.629647    7088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 12:59:28.673236    7088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 12:59:28.713048    7088 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0407 12:59:28.713234    7088 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0407 12:59:28.717774    7088 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0407 12:59:28.717774    7088 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0407 12:59:28.717774    7088 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0407 12:59:28.717774    7088 ip.go:211] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:71:e6:e3 Flags:up|broadcast|multicast|running}
	I0407 12:59:28.720002    7088 ip.go:214] interface addr: fe80::8b22:fcbb:f73:c9e5/64
	I0407 12:59:28.721071    7088 ip.go:214] interface addr: 172.17.80.1/20
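	The ip.go lines above scan the host's network interfaces for one whose name starts with "vEthernet (Default Switch)" and read its addresses (172.17.80.1/20 here); that address is what gets written into the VM's /etc/hosts as host.minikube.internal just below. A Go sketch of the lookup using only the standard net package; the helper name is illustrative.

package main

import (
	"fmt"
	"net"
	"strings"
)

// findHostInterface mirrors the ip.go search above: it looks for a host
// interface whose name matches the Hyper-V switch prefix and returns its
// addresses (here "vEthernet (Default Switch)" resolved to 172.17.80.1/20).
func findHostInterface(prefix string) ([]net.Addr, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return nil, err
	}
	for _, iface := range ifaces {
		if strings.HasPrefix(iface.Name, prefix) {
			return iface.Addrs()
		}
	}
	return nil, fmt.Errorf("no interface matching prefix %q", prefix)
}

func main() {
	addrs, err := findHostInterface("vEthernet (Default Switch)")
	if err != nil {
		fmt.Println(err)
		return
	}
	for _, a := range addrs {
		fmt.Println("interface addr:", a)
	}
}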
	I0407 12:59:28.731720    7088 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0407 12:59:28.736346    7088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 12:59:28.769852    7088 kubeadm.go:883] updating cluster {Name:ha-573100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-573100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.95.223 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 12:59:28.769852    7088 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 12:59:28.777251    7088 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0407 12:59:28.802686    7088 docker.go:689] Got preloaded images: 
	I0407 12:59:28.802744    7088 docker.go:695] registry.k8s.io/kube-apiserver:v1.32.2 wasn't preloaded
	I0407 12:59:28.813455    7088 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0407 12:59:28.841947    7088 ssh_runner.go:195] Run: which lz4
	I0407 12:59:28.848687    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0407 12:59:28.858634    7088 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0407 12:59:28.865059    7088 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0407 12:59:28.865089    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (349803115 bytes)
	I0407 12:59:30.745515    7088 docker.go:653] duration metric: took 1.8964218s to copy over tarball
	I0407 12:59:30.755951    7088 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0407 12:59:39.699486    7088 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9434985s)
	I0407 12:59:39.699486    7088 ssh_runner.go:146] rm: /preloaded.tar.lz4
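The preload step shown here copies the ~350 MB lz4-compressed image tarball into the VM, unpacks it into /var (where Docker's image store lives), and then restarts dockerd a few lines below so the preloaded images show up in `docker images`. The extraction itself boils down to the following (paths as in the log; a sketch for reference):

    # Unpack the preloaded image tarball into /var, keeping the
    # security.capability xattrs that some images rely on, then clean up.
    sudo tar --xattrs --xattrs-include security.capability \
        -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm /preloaded.tar.lz4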
	I0407 12:59:39.761441    7088 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0407 12:59:39.779545    7088 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0407 12:59:39.821149    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:59:40.027982    7088 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0407 12:59:43.098176    7088 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.0701815s)
	I0407 12:59:43.106156    7088 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0407 12:59:43.134156    7088 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0407 12:59:43.134156    7088 cache_images.go:84] Images are preloaded, skipping loading
	I0407 12:59:43.134156    7088 kubeadm.go:934] updating node { 172.17.95.223 8443 v1.32.2 docker true true} ...
	I0407 12:59:43.134156    7088 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-573100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.95.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-573100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
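The kubelet unit fragment above is installed as a systemd drop-in (10-kubeadm.conf, scp'd at 12:59:43.294 below), which is why ExecStart appears twice: the empty ExecStart= clears the base unit's command line before the minikube-specific one is set. To inspect the effective unit on the node, something like:

    # Show the merged kubelet unit, including the 10-kubeadm.conf drop-in,
    # and confirm the service picked it up after daemon-reload.
    systemctl cat kubelet
    systemctl status kubelet --no-pager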
	I0407 12:59:43.143699    7088 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0407 12:59:43.206593    7088 cni.go:84] Creating CNI manager for ""
	I0407 12:59:43.206593    7088 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0407 12:59:43.206593    7088 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 12:59:43.206593    7088 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.95.223 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-573100 NodeName:ha-573100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.95.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.95.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 12:59:43.206593    7088 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.95.223
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-573100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.17.95.223"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.95.223"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
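This generated kubeadm config is written to /var/tmp/minikube/kubeadm.yaml.new (see the scp at 12:59:43.348 below) and later fed to `kubeadm init --config`. One way to sanity-check a config like this without creating any cluster state is a dry run, e.g. (binary path and config path taken from the log; illustration only):

    # Validate the rendered kubeadm config without touching the node.
    sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" \
        kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run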
	I0407 12:59:43.206593    7088 kube-vip.go:115] generating kube-vip config ...
	I0407 12:59:43.218023    7088 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0407 12:59:43.240974    7088 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0407 12:59:43.241291    7088 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.95.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
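kube-vip runs as a static pod with NET_ADMIN/NET_RAW so it can claim the control-plane VIP (172.17.95.254 here) on eth0 and load-balance port 8443 across control-plane members. Once the manifest lands in /etc/kubernetes/manifests, whether this node currently holds the VIP can be checked from inside the VM, for example:

    # The elected kube-vip leader should hold the HA VIP on eth0 ...
    ip -4 addr show dev eth0 | grep 172.17.95.254
    # ... and the API server should answer on the VIP as well as the node IP.
    curl -sk https://172.17.95.254:8443/version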
	I0407 12:59:43.251799    7088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 12:59:43.266720    7088 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 12:59:43.276873    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0407 12:59:43.294245    7088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0407 12:59:43.321785    7088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 12:59:43.348443    7088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0407 12:59:43.377702    7088 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0407 12:59:43.413733    7088 ssh_runner.go:195] Run: grep 172.17.95.254	control-plane.minikube.internal$ /etc/hosts
	I0407 12:59:43.419693    7088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 12:59:43.447914    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:59:43.618339    7088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 12:59:43.642986    7088 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100 for IP: 172.17.95.223
	I0407 12:59:43.643113    7088 certs.go:194] generating shared ca certs ...
	I0407 12:59:43.643179    7088 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:59:43.643429    7088 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0407 12:59:43.644248    7088 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0407 12:59:43.644248    7088 certs.go:256] generating profile certs ...
	I0407 12:59:43.645333    7088 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\client.key
	I0407 12:59:43.645333    7088 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\client.crt with IP's: []
	I0407 12:59:43.804329    7088 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\client.crt ...
	I0407 12:59:43.804329    7088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\client.crt: {Name:mk21bbd0c664861c0fe2438c1431a34ed5a9b4df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:59:43.806166    7088 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\client.key ...
	I0407 12:59:43.806166    7088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\client.key: {Name:mkfe6f6525a808b66b9dafe2a6932dc7a7cbf405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:59:43.806982    7088 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key.5c786bb7
	I0407 12:59:43.807949    7088 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt.5c786bb7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.95.223 172.17.95.254]
	I0407 12:59:43.907294    7088 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt.5c786bb7 ...
	I0407 12:59:43.907294    7088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt.5c786bb7: {Name:mk0efb6b0c51f2e14af56446225c8d2570bd23db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:59:43.909083    7088 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key.5c786bb7 ...
	I0407 12:59:43.909083    7088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key.5c786bb7: {Name:mk60b5c7c5a6b211d5fb373ebfb305898b65796a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:59:43.910181    7088 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt.5c786bb7 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt
	I0407 12:59:43.925096    7088 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key.5c786bb7 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key
	I0407 12:59:43.926087    7088 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.key
	I0407 12:59:43.926087    7088 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.crt with IP's: []
	I0407 12:59:44.142211    7088 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.crt ...
	I0407 12:59:44.142211    7088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.crt: {Name:mk7df96e0f2dd05b3d9e0078537809f03b142a8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:59:44.143085    7088 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.key ...
	I0407 12:59:44.143085    7088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.key: {Name:mk3ac6e8ed7073461261aeace881e163508e3bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:59:44.144520    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0407 12:59:44.145061    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0407 12:59:44.145279    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0407 12:59:44.145444    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0407 12:59:44.145585    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0407 12:59:44.145618    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0407 12:59:44.145948    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0407 12:59:44.158852    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0407 12:59:44.160460    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem (1338 bytes)
	W0407 12:59:44.160997    7088 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728_empty.pem, impossibly tiny 0 bytes
	I0407 12:59:44.161116    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0407 12:59:44.161116    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0407 12:59:44.161714    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0407 12:59:44.162103    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0407 12:59:44.162344    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem (1708 bytes)
	I0407 12:59:44.162344    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:59:44.163063    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem -> /usr/share/ca-certificates/7728.pem
	I0407 12:59:44.163215    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> /usr/share/ca-certificates/77282.pem
	I0407 12:59:44.164368    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 12:59:44.206873    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 12:59:44.252029    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 12:59:44.294053    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0407 12:59:44.338456    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0407 12:59:44.381342    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0407 12:59:44.423351    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 12:59:44.465650    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 12:59:44.511069    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 12:59:44.553033    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem --> /usr/share/ca-certificates/7728.pem (1338 bytes)
	I0407 12:59:44.592421    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem --> /usr/share/ca-certificates/77282.pem (1708 bytes)
	I0407 12:59:44.635601    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 12:59:44.670389    7088 ssh_runner.go:195] Run: openssl version
	I0407 12:59:44.692726    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 12:59:44.725321    7088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:59:44.732486    7088 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:59:44.744337    7088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:59:44.761900    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 12:59:44.789885    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7728.pem && ln -fs /usr/share/ca-certificates/7728.pem /etc/ssl/certs/7728.pem"
	I0407 12:59:44.821332    7088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7728.pem
	I0407 12:59:44.828611    7088 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:38 /usr/share/ca-certificates/7728.pem
	I0407 12:59:44.838487    7088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7728.pem
	I0407 12:59:44.860192    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7728.pem /etc/ssl/certs/51391683.0"
	I0407 12:59:44.888302    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77282.pem && ln -fs /usr/share/ca-certificates/77282.pem /etc/ssl/certs/77282.pem"
	I0407 12:59:44.915794    7088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77282.pem
	I0407 12:59:44.922634    7088 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:38 /usr/share/ca-certificates/77282.pem
	I0407 12:59:44.932966    7088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77282.pem
	I0407 12:59:44.950247    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77282.pem /etc/ssl/certs/3ec20f2e.0"
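The openssl/ln sequence above reproduces the standard OpenSSL CA-directory layout: every CA certificate under /etc/ssl/certs gets a symlink named after its subject hash so TLS clients can locate it. Generically (filenames and the b5213941 hash come from the log; a sketch, not minikube's code):

    # Install a CA certificate and create the subject-hash symlink OpenSSL expects.
    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # b5213941 in this run
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"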
	I0407 12:59:44.977276    7088 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 12:59:44.984360    7088 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0407 12:59:44.984706    7088 kubeadm.go:392] StartCluster: {Name:ha-573100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-573100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.95.223 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:59:44.992570    7088 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0407 12:59:45.022692    7088 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 12:59:45.057904    7088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 12:59:45.087874    7088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 12:59:45.108100    7088 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 12:59:45.108172    7088 kubeadm.go:157] found existing configuration files:
	
	I0407 12:59:45.121242    7088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 12:59:45.142498    7088 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 12:59:45.153537    7088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 12:59:45.181630    7088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 12:59:45.197479    7088 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 12:59:45.206702    7088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 12:59:45.237301    7088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 12:59:45.253432    7088 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 12:59:45.264081    7088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 12:59:45.292068    7088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 12:59:45.309360    7088 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 12:59:45.318662    7088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 12:59:45.336726    7088 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 12:59:45.733165    7088 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 12:59:59.799619    7088 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0407 12:59:59.799774    7088 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 12:59:59.799893    7088 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 12:59:59.800145    7088 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 12:59:59.800427    7088 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0407 12:59:59.800427    7088 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 12:59:59.803968    7088 out.go:235]   - Generating certificates and keys ...
	I0407 12:59:59.804310    7088 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 12:59:59.804310    7088 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 12:59:59.804310    7088 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0407 12:59:59.804310    7088 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0407 12:59:59.804310    7088 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0407 12:59:59.804954    7088 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0407 12:59:59.805066    7088 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0407 12:59:59.805066    7088 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-573100 localhost] and IPs [172.17.95.223 127.0.0.1 ::1]
	I0407 12:59:59.805066    7088 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0407 12:59:59.805710    7088 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-573100 localhost] and IPs [172.17.95.223 127.0.0.1 ::1]
	I0407 12:59:59.805806    7088 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0407 12:59:59.806060    7088 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0407 12:59:59.806129    7088 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0407 12:59:59.806292    7088 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 12:59:59.806452    7088 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 12:59:59.806589    7088 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0407 12:59:59.806767    7088 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 12:59:59.806767    7088 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 12:59:59.806767    7088 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 12:59:59.807296    7088 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 12:59:59.807467    7088 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 12:59:59.811495    7088 out.go:235]   - Booting up control plane ...
	I0407 12:59:59.811541    7088 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 12:59:59.811541    7088 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 12:59:59.811541    7088 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 12:59:59.812308    7088 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 12:59:59.812520    7088 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 12:59:59.812520    7088 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 12:59:59.812880    7088 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0407 12:59:59.812880    7088 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0407 12:59:59.812880    7088 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.479913ms
	I0407 12:59:59.813448    7088 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0407 12:59:59.813639    7088 kubeadm.go:310] [api-check] The API server is healthy after 8.001794118s
	I0407 12:59:59.813639    7088 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0407 12:59:59.813639    7088 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0407 12:59:59.813639    7088 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0407 12:59:59.813639    7088 kubeadm.go:310] [mark-control-plane] Marking the node ha-573100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0407 12:59:59.813639    7088 kubeadm.go:310] [bootstrap-token] Using token: szigwj.nfxg52i168tpi7cc
	I0407 12:59:59.821683    7088 out.go:235]   - Configuring RBAC rules ...
	I0407 12:59:59.821683    7088 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0407 12:59:59.821683    7088 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0407 12:59:59.821683    7088 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0407 12:59:59.822620    7088 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0407 12:59:59.822620    7088 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0407 12:59:59.822620    7088 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0407 12:59:59.822620    7088 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0407 12:59:59.822620    7088 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0407 12:59:59.822620    7088 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0407 12:59:59.823575    7088 kubeadm.go:310] 
	I0407 12:59:59.823575    7088 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0407 12:59:59.823575    7088 kubeadm.go:310] 
	I0407 12:59:59.823575    7088 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0407 12:59:59.823575    7088 kubeadm.go:310] 
	I0407 12:59:59.823575    7088 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0407 12:59:59.823575    7088 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0407 12:59:59.823575    7088 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0407 12:59:59.823575    7088 kubeadm.go:310] 
	I0407 12:59:59.823575    7088 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0407 12:59:59.823575    7088 kubeadm.go:310] 
	I0407 12:59:59.824588    7088 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0407 12:59:59.824588    7088 kubeadm.go:310] 
	I0407 12:59:59.824588    7088 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0407 12:59:59.824588    7088 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0407 12:59:59.824588    7088 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0407 12:59:59.824588    7088 kubeadm.go:310] 
	I0407 12:59:59.824588    7088 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0407 12:59:59.824588    7088 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0407 12:59:59.824588    7088 kubeadm.go:310] 
	I0407 12:59:59.825618    7088 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token szigwj.nfxg52i168tpi7cc \
	I0407 12:59:59.825618    7088 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e47514a965ab97a4e144f4fd6f094d10c7d6accee7eb763a802bd4fd6fa9e479 \
	I0407 12:59:59.825618    7088 kubeadm.go:310] 	--control-plane 
	I0407 12:59:59.825618    7088 kubeadm.go:310] 
	I0407 12:59:59.825618    7088 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0407 12:59:59.825618    7088 kubeadm.go:310] 
	I0407 12:59:59.825618    7088 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token szigwj.nfxg52i168tpi7cc \
	I0407 12:59:59.825618    7088 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e47514a965ab97a4e144f4fd6f094d10c7d6accee7eb763a802bd4fd6fa9e479 
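The join commands printed by kubeadm are only valid while the bootstrap token exists (ttl is 24h in the config above). If they are needed later, a fresh token and the --discovery-token-ca-cert-hash value can be regenerated on the control plane with standard kubeadm/openssl commands; the CA path below matches the certificateDir used in this run but is otherwise an assumption:

    # Re-issue a complete join command with a new bootstrap token.
    sudo kubeadm token create --print-join-command
    # Recompute the CA cert hash by hand (sha256 of the CA public key).
    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'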
	I0407 12:59:59.825618    7088 cni.go:84] Creating CNI manager for ""
	I0407 12:59:59.826618    7088 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0407 12:59:59.829042    7088 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0407 12:59:59.844735    7088 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0407 12:59:59.852820    7088 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0407 12:59:59.852820    7088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0407 12:59:59.897638    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0407 13:00:00.580762    7088 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0407 13:00:00.594427    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:00:00.595428    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-573100 minikube.k8s.io/updated_at=2025_04_07T13_00_00_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277 minikube.k8s.io/name=ha-573100 minikube.k8s.io/primary=true
	I0407 13:00:00.622267    7088 ops.go:34] apiserver oom_adj: -16
	I0407 13:00:00.812514    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:00:01.312289    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:00:01.813767    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:00:02.310642    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:00:02.812313    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:00:03.313080    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:00:03.813190    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:00:03.953702    7088 kubeadm.go:1113] duration metric: took 3.3727954s to wait for elevateKubeSystemPrivileges
	I0407 13:00:03.953702    7088 kubeadm.go:394] duration metric: took 18.9689181s to StartCluster
	I0407 13:00:03.953702    7088 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:00:03.953702    7088 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 13:00:03.955898    7088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:00:03.957157    7088 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.17.95.223 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 13:00:03.957258    7088 start.go:241] waiting for startup goroutines ...
	I0407 13:00:03.957360    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0407 13:00:03.957258    7088 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0407 13:00:03.957584    7088 addons.go:69] Setting default-storageclass=true in profile "ha-573100"
	I0407 13:00:03.957584    7088 addons.go:69] Setting storage-provisioner=true in profile "ha-573100"
	I0407 13:00:03.957703    7088 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-573100"
	I0407 13:00:03.957703    7088 addons.go:238] Setting addon storage-provisioner=true in "ha-573100"
	I0407 13:00:03.957703    7088 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:00:03.957853    7088 host.go:66] Checking if "ha-573100" exists ...
	I0407 13:00:03.958173    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 13:00:03.959195    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 13:00:04.144719    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0407 13:00:04.639000    7088 start.go:971] {"host.minikube.internal": 172.17.80.1} host record injected into CoreDNS's ConfigMap
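The one-liner at 13:00:04.144 rewrites the coredns ConfigMap in place: it inserts a hosts block (mapping host.minikube.internal to the Hyper-V gateway 172.17.80.1) ahead of the `forward . /etc/resolv.conf` directive, adds `log` before `errors`, and replaces the ConfigMap. A quick way to confirm the record is live (kubectl invocations here are an illustration; the test itself drives the in-VM kubectl binary):

    # Check that the hosts block made it into the Corefile ...
    kubectl -n kube-system get configmap coredns -o yaml | grep -A3 'hosts {'
    # ... and that a pod can actually resolve the name.
    kubectl run dnscheck --rm -it --restart=Never --image=busybox -- \
        nslookup host.minikube.internal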
	I0407 13:00:06.312339    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:00:06.312555    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:06.315447    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:00:06.315531    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:06.315960    7088 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:00:06.316424    7088 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 13:00:06.317476    7088 kapi.go:59] client config for ha-573100: &rest.Config{Host:"https://172.17.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-573100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-573100\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2be92e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0407 13:00:06.319008    7088 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:00:06.319008    7088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0407 13:00:06.319072    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 13:00:06.319072    7088 cert_rotation.go:140] Starting client certificate rotation controller
	I0407 13:00:06.319072    7088 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0407 13:00:06.319072    7088 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0407 13:00:06.319072    7088 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0407 13:00:06.319072    7088 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0407 13:00:06.319859    7088 addons.go:238] Setting addon default-storageclass=true in "ha-573100"
	I0407 13:00:06.319967    7088 host.go:66] Checking if "ha-573100" exists ...
	I0407 13:00:06.321125    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 13:00:08.710776    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:00:08.710776    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:08.710964    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 13:00:08.782925    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:00:08.783139    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:08.783206    7088 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0407 13:00:08.783278    7088 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0407 13:00:08.783381    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 13:00:11.067757    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:00:11.067757    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:11.067897    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 13:00:11.447864    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 13:00:11.447864    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:11.447864    7088 sshutil.go:53] new ssh client: &{IP:172.17.95.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\id_rsa Username:docker}
	I0407 13:00:11.618298    7088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:00:13.688980    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 13:00:13.689872    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:13.690059    7088 sshutil.go:53] new ssh client: &{IP:172.17.95.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\id_rsa Username:docker}
	I0407 13:00:13.828462    7088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0407 13:00:13.970527    7088 round_trippers.go:470] GET https://172.17.95.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0407 13:00:13.971499    7088 round_trippers.go:476] Request Headers:
	I0407 13:00:13.971499    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:00:13.971499    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:00:13.984644    7088 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0407 13:00:13.985632    7088 round_trippers.go:470] PUT https://172.17.95.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0407 13:00:13.985632    7088 round_trippers.go:476] Request Headers:
	I0407 13:00:13.985632    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:00:13.985632    7088 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 13:00:13.985632    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:00:13.989947    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:00:13.992973    7088 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0407 13:00:13.999319    7088 addons.go:514] duration metric: took 10.04202s for enable addons: enabled=[storage-provisioner default-storageclass]
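The GET/PUT pair against /apis/storage.k8s.io/v1/storageclasses above is the default-storageclass addon marking the `standard` StorageClass as the cluster default. The equivalent check with kubectl (a sketch; the addon talks to the API server directly rather than shelling out):

    # The default StorageClass carries the is-default-class annotation.
    kubectl get storageclass
    kubectl get storageclass standard -o \
        jsonpath='{.metadata.annotations.storageclass\.kubernetes\.io/is-default-class}'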
	I0407 13:00:13.999420    7088 start.go:246] waiting for cluster config update ...
	I0407 13:00:13.999488    7088 start.go:255] writing updated cluster config ...
	I0407 13:00:14.003560    7088 out.go:201] 
	I0407 13:00:14.018164    7088 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:00:14.018164    7088 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\config.json ...
	I0407 13:00:14.025188    7088 out.go:177] * Starting "ha-573100-m02" control-plane node in "ha-573100" cluster
	I0407 13:00:14.029200    7088 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 13:00:14.029200    7088 cache.go:56] Caching tarball of preloaded images
	I0407 13:00:14.029200    7088 preload.go:172] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0407 13:00:14.029200    7088 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0407 13:00:14.030187    7088 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\config.json ...
	I0407 13:00:14.035191    7088 start.go:360] acquireMachinesLock for ha-573100-m02: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 13:00:14.035191    7088 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-573100-m02"
	I0407 13:00:14.035191    7088 start.go:93] Provisioning new machine with config: &{Name:ha-573100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-573100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.95.223 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 13:00:14.035191    7088 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0407 13:00:14.039191    7088 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0407 13:00:14.039191    7088 start.go:159] libmachine.API.Create for "ha-573100" (driver="hyperv")
	I0407 13:00:14.039191    7088 client.go:168] LocalClient.Create starting
	I0407 13:00:14.040199    7088 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0407 13:00:14.040199    7088 main.go:141] libmachine: Decoding PEM data...
	I0407 13:00:14.040199    7088 main.go:141] libmachine: Parsing certificate...
	I0407 13:00:14.040199    7088 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0407 13:00:14.041188    7088 main.go:141] libmachine: Decoding PEM data...
	I0407 13:00:14.041188    7088 main.go:141] libmachine: Parsing certificate...
	I0407 13:00:14.041188    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0407 13:00:15.940822    7088 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0407 13:00:15.940822    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:15.940822    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0407 13:00:17.658612    7088 main.go:141] libmachine: [stdout =====>] : False
	
	I0407 13:00:17.659074    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:17.659074    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0407 13:00:19.113110    7088 main.go:141] libmachine: [stdout =====>] : True
	
	I0407 13:00:19.113358    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:19.113358    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0407 13:00:22.642467    7088 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0407 13:00:22.643579    7088 main.go:141] libmachine: [stderr =====>] : 
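Every `[executing ==>]` line above is the driver shelling out to powershell.exe with `-NoProfile -NonInteractive` and capturing stdout and stderr separately, which is why each command is followed by a `[stdout =====>]` / `[stderr =====>]` pair. A minimal Go sketch of that pattern, with a hypothetical `runPowerShell` helper, could look like this; the switch query reuses the command visible in the log.

```go
package main

import (
	"bytes"
	"fmt"
	"os/exec"
)

// runPowerShell is a hypothetical helper mirroring the "[executing ==>]"
// pattern above: run powershell.exe non-interactively and return stdout and
// stderr separately, just as the log prints them.
func runPowerShell(command string) (stdout, stderr string, err error) {
	cmd := exec.Command(
		`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
		"-NoProfile", "-NonInteractive", command,
	)
	var out, errBuf bytes.Buffer
	cmd.Stdout = &out
	cmd.Stderr = &errBuf
	err = cmd.Run()
	return out.String(), errBuf.String(), err
}

func main() {
	// The same switch enumeration that appears in the log above.
	out, errOut, err := runPowerShell(`[Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`)
	if err != nil {
		fmt.Println("stderr:", errOut)
		panic(err)
	}
	fmt.Println(out)
}
```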
	I0407 13:00:22.646188    7088 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0407 13:00:23.161325    7088 main.go:141] libmachine: Creating SSH key...
	I0407 13:00:23.243457    7088 main.go:141] libmachine: Creating VM...
	I0407 13:00:23.243457    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0407 13:00:26.189500    7088 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0407 13:00:26.189500    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:26.189500    7088 main.go:141] libmachine: Using switch "Default Switch"
	I0407 13:00:26.189500    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0407 13:00:27.974558    7088 main.go:141] libmachine: [stdout =====>] : True
	
	I0407 13:00:27.974558    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:27.975403    7088 main.go:141] libmachine: Creating VHD
	I0407 13:00:27.975403    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0407 13:00:31.811567    7088 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 3353EDDA-1498-4F5F-A6FB-869591EAB766
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0407 13:00:31.812463    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:31.812463    7088 main.go:141] libmachine: Writing magic tar header
	I0407 13:00:31.812463    7088 main.go:141] libmachine: Writing SSH key tar header
	I0407 13:00:31.829007    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0407 13:00:35.006191    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:00:35.006191    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:35.007169    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m02\disk.vhd' -SizeBytes 20000MB
	I0407 13:00:37.530661    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:00:37.530661    7088 main.go:141] libmachine: [stderr =====>] : 
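The disk preparation above creates a tiny 10MB fixed VHD, writes a tar header and the generated SSH key directly into it ("Writing magic tar header" / "Writing SSH key tar header"), then converts it to a dynamic VHD and resizes it to 20000MB so the guest can pick the key up on first boot. The sketch below illustrates only the tar-embedding part; the start-of-image placement and the `.ssh/authorized_keys` entry name are assumptions for illustration, not details read from the log.

```go
package main

import (
	"archive/tar"
	"log"
	"os"
)

// writeKeyTar writes a small tar stream containing an SSH public key at the
// start of a raw disk image, mirroring the "Writing magic tar header" /
// "Writing SSH key tar header" steps above. Entry name and placement are
// illustrative assumptions.
func writeKeyTar(imagePath, pubKeyPath string) error {
	key, err := os.ReadFile(pubKeyPath)
	if err != nil {
		return err
	}
	f, err := os.OpenFile(imagePath, os.O_WRONLY, 0644) // overwrite image data in place, no truncation
	if err != nil {
		return err
	}
	defer f.Close()
	tw := tar.NewWriter(f)
	if err := tw.WriteHeader(&tar.Header{
		Name: ".ssh/authorized_keys",
		Mode: 0644,
		Size: int64(len(key)),
	}); err != nil {
		return err
	}
	if _, err := tw.Write(key); err != nil {
		return err
	}
	return tw.Close()
}

func main() {
	// Placeholder paths; the run above used fixed.vhd in the machine directory.
	if err := writeKeyTar("fixed.vhd", "id_rsa.pub"); err != nil {
		log.Fatal(err)
	}
}
```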
	I0407 13:00:37.530661    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-573100-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0407 13:00:41.078585    7088 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-573100-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0407 13:00:41.078585    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:41.079284    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-573100-m02 -DynamicMemoryEnabled $false
	I0407 13:00:43.307325    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:00:43.307753    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:43.307753    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-573100-m02 -Count 2
	I0407 13:00:45.458801    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:00:45.458801    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:45.459336    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-573100-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m02\boot2docker.iso'
	I0407 13:00:48.010252    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:00:48.010252    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:48.010669    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-573100-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m02\disk.vhd'
	I0407 13:00:50.701895    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:00:50.702086    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:50.702086    7088 main.go:141] libmachine: Starting VM...
	I0407 13:00:50.702086    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-573100-m02
	I0407 13:00:53.719116    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:00:53.719116    7088 main.go:141] libmachine: [stderr =====>] : 
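Between the VHD work and "Starting VM..." the driver issues one Hyper-V cmdlet per step: New-VM, Set-VMMemory (dynamic memory off), Set-VMProcessor, Set-VMDvdDrive pointing at boot2docker.iso, Add-VMHardDiskDrive, and finally Start-VM. To replay that build-out by hand for a scratch VM, a sketch like the following would do; the VM name and paths here are placeholders, while the cmdlets and flags are the ones visible in the log.

```go
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Placeholder name and paths; substitute the real machine directory.
	const vm = "scratch-vm"
	steps := []string{
		`Hyper-V\New-VM ` + vm + ` -Path 'C:\vms\` + vm + `' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`,
		`Hyper-V\Set-VMMemory -VMName ` + vm + ` -DynamicMemoryEnabled $false`,
		`Hyper-V\Set-VMProcessor ` + vm + ` -Count 2`,
		`Hyper-V\Set-VMDvdDrive -VMName ` + vm + ` -Path 'C:\vms\` + vm + `\boot2docker.iso'`,
		`Hyper-V\Add-VMHardDiskDrive -VMName ` + vm + ` -Path 'C:\vms\` + vm + `\disk.vhd'`,
		`Hyper-V\Start-VM ` + vm,
	}
	for _, step := range steps {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", step).CombinedOutput()
		fmt.Printf(">>> %s\n%s\n", step, out)
		if err != nil {
			panic(err) // stop at the first failing cmdlet
		}
	}
}
```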
	I0407 13:00:53.719116    7088 main.go:141] libmachine: Waiting for host to start...
	I0407 13:00:53.720116    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:00:56.023235    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:00:56.023440    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:56.023584    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:00:58.651694    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:00:58.651694    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:59.652693    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:01:02.080278    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:01:02.080278    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:02.080278    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:01:04.706413    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:01:04.706413    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:05.706546    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:01:07.863083    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:01:07.863083    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:07.864085    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:01:10.365875    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:01:10.365875    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:11.366046    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:01:13.542100    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:01:13.542334    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:13.542334    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:01:16.097723    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:01:16.097723    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:17.098182    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:01:19.341818    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:01:19.342094    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:19.342094    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:01:21.900537    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:01:21.900537    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:21.901200    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:01:24.007251    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:01:24.007510    7088 main.go:141] libmachine: [stderr =====>] : 
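"Waiting for host to start..." above is a poll loop: the driver alternates between querying the VM state and asking for the first IP address on the first network adapter, pausing roughly a second between empty results until 172.17.82.162 appears. A condensed Go version of that wait, with an assumed overall timeout, could look like this:

```go
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// waitForIP polls Hyper-V until the VM reports an IP address on its first
// network adapter, mirroring the repeated Get-VM queries in the log. The
// timeout and interval are assumptions; the log only shows ~1s pauses.
func waitForIP(vmName string, timeout time.Duration) (string, error) {
	query := fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vmName)
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		out, err := exec.Command(
			`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
			"-NoProfile", "-NonInteractive", query,
		).Output()
		if err == nil {
			if ip := strings.TrimSpace(string(out)); ip != "" {
				return ip, nil
			}
		}
		time.Sleep(time.Second)
	}
	return "", fmt.Errorf("timed out waiting for an IP on %s", vmName)
}

func main() {
	ip, err := waitForIP("ha-573100-m02", 5*time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("VM IP:", ip)
}
```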
	I0407 13:01:24.007617    7088 machine.go:93] provisionDockerMachine start ...
	I0407 13:01:24.007928    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:01:26.164984    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:01:26.164984    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:26.164984    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:01:28.672921    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:01:28.672921    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:28.678826    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:01:28.680191    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.82.162 22 <nil> <nil>}
	I0407 13:01:28.680191    7088 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 13:01:28.810399    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0407 13:01:28.810459    7088 buildroot.go:166] provisioning hostname "ha-573100-m02"
	I0407 13:01:28.810519    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:01:30.928993    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:01:30.929305    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:30.929305    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:01:33.512685    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:01:33.512685    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:33.518374    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:01:33.519096    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.82.162 22 <nil> <nil>}
	I0407 13:01:33.519096    7088 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-573100-m02 && echo "ha-573100-m02" | sudo tee /etc/hostname
	I0407 13:01:33.668344    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-573100-m02
	
	I0407 13:01:33.668406    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:01:35.778565    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:01:35.778565    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:35.778641    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:01:38.274066    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:01:38.274066    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:38.280678    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:01:38.280778    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.82.162 22 <nil> <nil>}
	I0407 13:01:38.280778    7088 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-573100-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-573100-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-573100-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 13:01:38.429919    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
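Once the IP is known, provisioning happens over SSH as the docker user with the generated id_rsa key (the sshutil lines later in the log show Username:docker): set the hostname, write /etc/hostname, and patch /etc/hosts, exactly as in the shell snippets above. The sketch below shows that style of remote execution with golang.org/x/crypto/ssh; host-key checking is disabled purely for brevity on a throwaway test VM, and the key path is a placeholder.

```go
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// runRemote runs a single command over SSH using key-based auth and returns
// its combined output, similar in spirit to the SSH commands in the log.
func runRemote(addr, user, keyPath, cmd string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a disposable test VM
	}
	client, err := ssh.Dial("tcp", addr, cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.CombinedOutput(cmd)
	return string(out), err
}

func main() {
	// IP, user and hostname are the ones from the log; the key path is a placeholder.
	out, err := runRemote("172.17.82.162:22", "docker", "id_rsa",
		`sudo hostname ha-573100-m02 && echo "ha-573100-m02" | sudo tee /etc/hostname`)
	if err != nil {
		panic(err)
	}
	fmt.Print(out)
}
```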
	I0407 13:01:38.429919    7088 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0407 13:01:38.429919    7088 buildroot.go:174] setting up certificates
	I0407 13:01:38.429919    7088 provision.go:84] configureAuth start
	I0407 13:01:38.429919    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:01:40.523263    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:01:40.523534    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:40.523534    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:01:42.986312    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:01:42.986312    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:42.987045    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:01:45.088779    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:01:45.089311    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:45.089408    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:01:47.563218    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:01:47.563218    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:47.563218    7088 provision.go:143] copyHostCerts
	I0407 13:01:47.563218    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0407 13:01:47.563218    7088 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0407 13:01:47.563218    7088 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0407 13:01:47.563218    7088 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0407 13:01:47.563218    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0407 13:01:47.563218    7088 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0407 13:01:47.563218    7088 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0407 13:01:47.563218    7088 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0407 13:01:47.563218    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0407 13:01:47.563218    7088 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0407 13:01:47.563218    7088 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0407 13:01:47.563218    7088 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0407 13:01:47.568741    7088 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-573100-m02 san=[127.0.0.1 172.17.82.162 ha-573100-m02 localhost minikube]
	I0407 13:01:47.850562    7088 provision.go:177] copyRemoteCerts
	I0407 13:01:47.859512    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 13:01:47.860539    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:01:49.984246    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:01:49.984246    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:49.984246    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:01:52.492834    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:01:52.492834    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:52.493517    7088 sshutil.go:53] new ssh client: &{IP:172.17.82.162 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m02\id_rsa Username:docker}
	I0407 13:01:52.599357    7088 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7398247s)
	I0407 13:01:52.599357    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0407 13:01:52.600433    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 13:01:52.658346    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0407 13:01:52.658888    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0407 13:01:52.704286    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0407 13:01:52.704836    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0407 13:01:52.746925    7088 provision.go:87] duration metric: took 14.3169455s to configureAuth
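configureAuth above generates a Docker server certificate whose SANs cover every name the daemon might be reached by (127.0.0.1, the VM IP 172.17.82.162, ha-573100-m02, localhost, minikube) and then copies ca.pem, server.pem and server-key.pem into /etc/docker so dockerd can require TLS on :2376. The real certificate is signed by the minikube CA; the sketch below self-signs purely to show how that SAN list is expressed with crypto/x509.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"log"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SAN list copied from the provision.go line above. The real flow signs
	// with the minikube CA; this sketch self-signs only to stay short.
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-573100-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-573100-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.82.162")},
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		log.Fatal(err)
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		log.Fatal(err)
	}
	certPEM := pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM := pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	// These would then be copied to /etc/docker/server.pem and server-key.pem.
	if err := os.WriteFile("server.pem", certPEM, 0644); err != nil {
		log.Fatal(err)
	}
	if err := os.WriteFile("server-key.pem", keyPEM, 0600); err != nil {
		log.Fatal(err)
	}
}
```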
	I0407 13:01:52.746925    7088 buildroot.go:189] setting minikube options for container-runtime
	I0407 13:01:52.747846    7088 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:01:52.747896    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:01:54.827454    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:01:54.827668    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:54.827668    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:01:57.360113    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:01:57.360113    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:57.366106    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:01:57.367083    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.82.162 22 <nil> <nil>}
	I0407 13:01:57.367233    7088 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0407 13:01:57.498793    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0407 13:01:57.498866    7088 buildroot.go:70] root file system type: tmpfs
	I0407 13:01:57.499078    7088 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0407 13:01:57.499154    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:01:59.590944    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:01:59.590944    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:59.591792    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:02:02.107965    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:02:02.107965    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:02.113686    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:02:02.114412    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.82.162 22 <nil> <nil>}
	I0407 13:02:02.114412    7088 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.95.223"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0407 13:02:02.259759    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.95.223
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0407 13:02:02.260479    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:02:04.331421    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:02:04.331421    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:04.331525    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:02:06.821254    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:02:06.821254    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:06.829042    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:02:06.829783    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.82.162 22 <nil> <nil>}
	I0407 13:02:06.829783    7088 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0407 13:02:08.981285    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0407 13:02:08.981285    7088 machine.go:96] duration metric: took 44.9733808s to provisionDockerMachine
	I0407 13:02:08.981285    7088 client.go:171] duration metric: took 1m54.9416106s to LocalClient.Create
	I0407 13:02:08.981285    7088 start.go:167] duration metric: took 1m54.9416106s to libmachine.API.Create "ha-573100"
	I0407 13:02:08.981285    7088 start.go:293] postStartSetup for "ha-573100-m02" (driver="hyperv")
	I0407 13:02:08.981285    7088 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 13:02:08.995671    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 13:02:08.995671    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:02:11.078501    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:02:11.078773    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:11.078773    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:02:13.528362    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:02:13.528362    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:13.528362    7088 sshutil.go:53] new ssh client: &{IP:172.17.82.162 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m02\id_rsa Username:docker}
	I0407 13:02:13.632499    7088 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6368075s)
	I0407 13:02:13.643947    7088 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 13:02:13.650705    7088 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 13:02:13.650705    7088 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0407 13:02:13.650705    7088 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0407 13:02:13.652373    7088 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> 77282.pem in /etc/ssl/certs
	I0407 13:02:13.652414    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> /etc/ssl/certs/77282.pem
	I0407 13:02:13.663164    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 13:02:13.681766    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem --> /etc/ssl/certs/77282.pem (1708 bytes)
	I0407 13:02:13.732675    7088 start.go:296] duration metric: took 4.7513692s for postStartSetup
	I0407 13:02:13.735399    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:02:15.845304    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:02:15.845304    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:15.845304    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:02:18.335785    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:02:18.335785    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:18.336508    7088 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\config.json ...
	I0407 13:02:18.338927    7088 start.go:128] duration metric: took 2m4.3032123s to createHost
	I0407 13:02:18.339050    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:02:20.429546    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:02:20.430373    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:20.430373    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:02:22.928925    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:02:22.928999    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:22.934759    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:02:22.935274    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.82.162 22 <nil> <nil>}
	I0407 13:02:22.935274    7088 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 13:02:23.060239    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744030943.072476964
	
	I0407 13:02:23.060239    7088 fix.go:216] guest clock: 1744030943.072476964
	I0407 13:02:23.060320    7088 fix.go:229] Guest: 2025-04-07 13:02:23.072476964 +0000 UTC Remote: 2025-04-07 13:02:18.3389272 +0000 UTC m=+321.779734301 (delta=4.733549764s)
	I0407 13:02:23.060365    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:02:25.137366    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:02:25.137366    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:25.138358    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:02:27.682992    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:02:27.683821    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:27.689762    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:02:27.690468    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.82.162 22 <nil> <nil>}
	I0407 13:02:27.690468    7088 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744030943
	I0407 13:02:27.832219    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  7 13:02:23 UTC 2025
	
	I0407 13:02:27.832280    7088 fix.go:236] clock set: Mon Apr  7 13:02:23 UTC 2025
	 (err=<nil>)
	I0407 13:02:27.832280    7088 start.go:83] releasing machines lock for "ha-573100-m02", held for 2m13.7965248s
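After the host is up, fix.go compares the guest clock (read with `date +%s.%N`) against the host-side timestamp recorded when createHost finished; here the guest was about 4.73s ahead, and the provisioner then issued `sudo date -s @1744030943`. The snippet below only reproduces that delta calculation from the two values in the log; when and with which value the clock is rewritten is minikube's decision and not modeled here.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	// Values copied from the fix.go lines in the log above.
	guestRaw := "1744030943.072476964"                               // guest `date +%s.%N`
	hostRef := time.Date(2025, 4, 7, 13, 2, 18, 338927200, time.UTC) // the logged "Remote" timestamp

	parts := strings.SplitN(guestRaw, ".", 2)
	secs, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt((parts[1] + "000000000")[:9], 10, 64)
	guest := time.Unix(secs, nsec).UTC()

	fmt.Println("guest clock:", guest)
	fmt.Println("delta vs host:", guest.Sub(hostRef)) // ~4.733549764s, matching the log
	fmt.Printf("reset command as issued in the log: sudo date -s @%d\n", guest.Unix())
}
```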
	I0407 13:02:27.832576    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:02:29.931800    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:02:29.931800    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:29.932004    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:02:32.456589    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:02:32.456589    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:32.460486    7088 out.go:177] * Found network options:
	I0407 13:02:32.463067    7088 out.go:177]   - NO_PROXY=172.17.95.223
	W0407 13:02:32.466137    7088 proxy.go:119] fail to check proxy env: Error ip not in block
	I0407 13:02:32.468572    7088 out.go:177]   - NO_PROXY=172.17.95.223
	W0407 13:02:32.471057    7088 proxy.go:119] fail to check proxy env: Error ip not in block
	W0407 13:02:32.472088    7088 proxy.go:119] fail to check proxy env: Error ip not in block
	I0407 13:02:32.473615    7088 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0407 13:02:32.474660    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:02:32.482717    7088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0407 13:02:32.482717    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:02:34.722583    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:02:34.722583    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:34.722583    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:02:34.722583    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:34.722832    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:02:34.722892    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:02:37.393218    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:02:37.394071    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:37.394284    7088 sshutil.go:53] new ssh client: &{IP:172.17.82.162 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m02\id_rsa Username:docker}
	I0407 13:02:37.421438    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:02:37.421438    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:37.421670    7088 sshutil.go:53] new ssh client: &{IP:172.17.82.162 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m02\id_rsa Username:docker}
	I0407 13:02:37.494550    7088 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0118105s)
	W0407 13:02:37.494631    7088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 13:02:37.505544    7088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 13:02:37.506448    7088 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0328109s)
	W0407 13:02:37.506448    7088 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
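Note what actually failed in this step: the reachability probe for https://registry.k8s.io/ runs through ssh_runner inside the Linux guest, but it is invoked as `curl.exe`, which bash there cannot find (exit status 127). The "Failing to connect" warning that follows therefore reflects the missing binary rather than a verified network failure. A native probe with the same 2-second budget would look like the hedged sketch below.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Same 2-second reachability check the log attempts with
	// `curl.exe -sS -m 2 https://registry.k8s.io/`, done natively in Go.
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("https://registry.k8s.io/")
	if err != nil {
		fmt.Println("registry not reachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry reachable, status:", resp.Status)
}
```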
	I0407 13:02:37.535745    7088 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 13:02:37.535745    7088 start.go:495] detecting cgroup driver to use...
	I0407 13:02:37.535745    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:02:37.581687    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0407 13:02:37.613493    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0407 13:02:37.632362    7088 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	W0407 13:02:37.643821    7088 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0407 13:02:37.643821    7088 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0407 13:02:37.645834    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0407 13:02:37.673979    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 13:02:37.704730    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0407 13:02:37.735391    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 13:02:37.765119    7088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 13:02:37.801669    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0407 13:02:37.834523    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0407 13:02:37.865589    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0407 13:02:37.896999    7088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 13:02:37.915906    7088 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 13:02:37.927317    7088 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 13:02:37.957524    7088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 13:02:37.983388    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:02:38.185390    7088 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0407 13:02:38.214407    7088 start.go:495] detecting cgroup driver to use...
	I0407 13:02:38.228213    7088 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0407 13:02:38.264017    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 13:02:38.297812    7088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 13:02:38.341407    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 13:02:38.377900    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 13:02:38.409708    7088 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0407 13:02:38.474246    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 13:02:38.502511    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:02:38.548886    7088 ssh_runner.go:195] Run: which cri-dockerd
	I0407 13:02:38.565281    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0407 13:02:38.581511    7088 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0407 13:02:38.618480    7088 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0407 13:02:38.800982    7088 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0407 13:02:38.977656    7088 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0407 13:02:38.977656    7088 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0407 13:02:39.026140    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:02:39.219102    7088 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0407 13:02:41.781927    7088 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.562738s)
	I0407 13:02:41.794022    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0407 13:02:41.835006    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 13:02:41.868958    7088 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0407 13:02:42.061443    7088 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0407 13:02:42.261178    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:02:42.474095    7088 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0407 13:02:42.512513    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 13:02:42.548716    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:02:42.730861    7088 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0407 13:02:42.831136    7088 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0407 13:02:42.842948    7088 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0407 13:02:42.850956    7088 start.go:563] Will wait 60s for crictl version
	I0407 13:02:42.862897    7088 ssh_runner.go:195] Run: which crictl
	I0407 13:02:42.878548    7088 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 13:02:42.927796    7088 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0407 13:02:42.936981    7088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 13:02:42.980964    7088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 13:02:43.017931    7088 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0407 13:02:43.021933    7088 out.go:177]   - env NO_PROXY=172.17.95.223
	I0407 13:02:43.024394    7088 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0407 13:02:43.030312    7088 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0407 13:02:43.030881    7088 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0407 13:02:43.030881    7088 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0407 13:02:43.030881    7088 ip.go:211] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:71:e6:e3 Flags:up|broadcast|multicast|running}
	I0407 13:02:43.035088    7088 ip.go:214] interface addr: fe80::8b22:fcbb:f73:c9e5/64
	I0407 13:02:43.035088    7088 ip.go:214] interface addr: 172.17.80.1/20
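The ip.go lines above resolve the host-side address of the "vEthernet (Default Switch)" adapter (172.17.80.1/20), skipping the link-local IPv6 entry, so it can be written into the guest's /etc/hosts as host.minikube.internal in the next step. The equivalent lookup with Go's net package, as a small sketch:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// findInterfaceIPv4 mirrors the ip.go lookup in the log: pick the first
// interface whose name starts with the given prefix and return its IPv4
// address, skipping IPv6 entries such as fe80::... link-local addresses.
func findInterfaceIPv4(prefix string) (string, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return "", err
	}
	for _, iface := range ifaces {
		if !strings.HasPrefix(iface.Name, prefix) {
			continue
		}
		addrs, err := iface.Addrs()
		if err != nil {
			return "", err
		}
		for _, addr := range addrs {
			if ipNet, ok := addr.(*net.IPNet); ok && ipNet.IP.To4() != nil {
				return ipNet.IP.String(), nil
			}
		}
	}
	return "", fmt.Errorf("no interface matching prefix %q with an IPv4 address", prefix)
}

func main() {
	ip, err := findInterfaceIPv4("vEthernet (Default Switch)")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("host-side switch IP:", ip) // the log resolved 172.17.80.1/20
}
```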
	I0407 13:02:43.047708    7088 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0407 13:02:43.053805    7088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:02:43.076747    7088 mustload.go:65] Loading cluster: ha-573100
	I0407 13:02:43.077393    7088 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:02:43.078082    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 13:02:45.190200    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:02:45.190588    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:45.190588    7088 host.go:66] Checking if "ha-573100" exists ...
	I0407 13:02:45.191355    7088 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100 for IP: 172.17.82.162
	I0407 13:02:45.191355    7088 certs.go:194] generating shared ca certs ...
	I0407 13:02:45.191426    7088 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:02:45.192021    7088 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0407 13:02:45.192494    7088 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0407 13:02:45.192706    7088 certs.go:256] generating profile certs ...
	I0407 13:02:45.193446    7088 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\client.key
	I0407 13:02:45.193643    7088 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key.46166831
	I0407 13:02:45.193808    7088 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt.46166831 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.95.223 172.17.82.162 172.17.95.254]
	I0407 13:02:45.371560    7088 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt.46166831 ...
	I0407 13:02:45.371560    7088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt.46166831: {Name:mkc8e38912772193e71c7d2f229115814f2aefe8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:02:45.373468    7088 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key.46166831 ...
	I0407 13:02:45.373468    7088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key.46166831: {Name:mka4627968bd9ab0cbeec7ef9cb63578cf53bbb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:02:45.374511    7088 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt.46166831 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt
	I0407 13:02:45.390526    7088 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key.46166831 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key
	I0407 13:02:45.391613    7088 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.key
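The "generating signed profile cert" step above issues the API-server serving certificate against the shared minikube CA, embedding the control-plane IPs (including the kube-vip VIP 172.17.95.254) as SANs. For illustration, a minimal crypto/x509 sketch of issuing such a cert; this is a standard-library approximation, not minikube's crypto.go, and the throwaway CA below stands in for the cached ca.crt/ca.key:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

// must keeps the sketch short; real code should handle each error individually.
func must[T any](v T, err error) T {
	if err != nil {
		panic(err)
	}
	return v
}

func main() {
	// Throwaway CA for the sketch; the run above reuses the cached minikubeCA key pair.
	caKey := must(rsa.GenerateKey(rand.Reader, 2048))
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER := must(x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey))
	caCert := must(x509.ParseCertificate(caDER))

	// Serving cert carrying the IP SANs listed in the log for the HA control plane.
	srvKey := must(rsa.GenerateKey(rand.Reader, 2048))
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("172.17.95.223"), net.ParseIP("172.17.82.162"), net.ParseIP("172.17.95.254"),
		},
	}
	der := must(x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey))
	// The matching private key would be PEM-encoded to apiserver.key the same way.
	if err := pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der}); err != nil {
		panic(err)
	}
}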
	I0407 13:02:45.391613    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0407 13:02:45.392693    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0407 13:02:45.392693    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0407 13:02:45.392693    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0407 13:02:45.392693    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0407 13:02:45.393354    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0407 13:02:45.393608    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0407 13:02:45.393608    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0407 13:02:45.394599    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem (1338 bytes)
	W0407 13:02:45.394599    7088 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728_empty.pem, impossibly tiny 0 bytes
	I0407 13:02:45.394599    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0407 13:02:45.394599    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0407 13:02:45.396193    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0407 13:02:45.396567    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0407 13:02:45.396773    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem (1708 bytes)
	I0407 13:02:45.397368    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem -> /usr/share/ca-certificates/7728.pem
	I0407 13:02:45.397592    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> /usr/share/ca-certificates/77282.pem
	I0407 13:02:45.397678    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:02:45.397678    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 13:02:47.494322    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:02:47.495299    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:47.495299    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 13:02:50.016729    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 13:02:50.018174    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:50.018361    7088 sshutil.go:53] new ssh client: &{IP:172.17.95.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\id_rsa Username:docker}
	I0407 13:02:50.115976    7088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0407 13:02:50.125246    7088 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0407 13:02:50.164141    7088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0407 13:02:50.170851    7088 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0407 13:02:50.201863    7088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0407 13:02:50.207966    7088 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0407 13:02:50.236577    7088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0407 13:02:50.244216    7088 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0407 13:02:50.282436    7088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0407 13:02:50.289447    7088 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0407 13:02:50.327954    7088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0407 13:02:50.334630    7088 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0407 13:02:50.356061    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 13:02:50.402453    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 13:02:50.446425    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 13:02:50.498396    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0407 13:02:50.549701    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0407 13:02:50.596602    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 13:02:50.643195    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 13:02:50.688188    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 13:02:50.732546    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem --> /usr/share/ca-certificates/7728.pem (1338 bytes)
	I0407 13:02:50.775318    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem --> /usr/share/ca-certificates/77282.pem (1708 bytes)
	I0407 13:02:50.818123    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 13:02:50.861170    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0407 13:02:50.890212    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0407 13:02:50.920775    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0407 13:02:50.951535    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0407 13:02:50.981806    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0407 13:02:51.011580    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0407 13:02:51.041971    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0407 13:02:51.082788    7088 ssh_runner.go:195] Run: openssl version
	I0407 13:02:51.104816    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7728.pem && ln -fs /usr/share/ca-certificates/7728.pem /etc/ssl/certs/7728.pem"
	I0407 13:02:51.135729    7088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7728.pem
	I0407 13:02:51.142422    7088 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:38 /usr/share/ca-certificates/7728.pem
	I0407 13:02:51.152622    7088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7728.pem
	I0407 13:02:51.171339    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7728.pem /etc/ssl/certs/51391683.0"
	I0407 13:02:51.204158    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77282.pem && ln -fs /usr/share/ca-certificates/77282.pem /etc/ssl/certs/77282.pem"
	I0407 13:02:51.235753    7088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77282.pem
	I0407 13:02:51.242214    7088 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:38 /usr/share/ca-certificates/77282.pem
	I0407 13:02:51.252721    7088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77282.pem
	I0407 13:02:51.270890    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77282.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 13:02:51.302111    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 13:02:51.333303    7088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:02:51.340876    7088 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:02:51.353994    7088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:02:51.372326    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
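The three blocks above install each CA into the VM's trust store the way OpenSSL expects: place the PEM under /usr/share/ca-certificates, compute its subject hash with openssl x509 -hash, and symlink /etc/ssl/certs/<hash>.0 to it. A small Go sketch of that install step, shelling out to openssl the same way the log does (paths match the log and are illustrative; writing to /etc/ssl/certs requires root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert mirrors the shell sequence in the log: ask openssl for the
// certificate's subject hash, then symlink <hash>.0 in the trust directory
// to the PEM file so OpenSSL-based clients can look it up.
func installCACert(pemPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // replace a stale link if one exists
	return os.Symlink(pemPath, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}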
	I0407 13:02:51.407105    7088 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 13:02:51.413660    7088 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0407 13:02:51.414009    7088 kubeadm.go:934] updating node {m02 172.17.82.162 8443 v1.32.2 docker true true} ...
	I0407 13:02:51.414230    7088 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-573100-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.82.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-573100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 13:02:51.414230    7088 kube-vip.go:115] generating kube-vip config ...
	I0407 13:02:51.428789    7088 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0407 13:02:51.457118    7088 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0407 13:02:51.457207    7088 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.95.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
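The manifest above is the rendered kube-vip static pod; the "generating kube-vip config" step fills in the per-cluster values (VIP address, interface, port, image). As a rough illustration only, a text/template sketch of that kind of rendering; the template text and parameter struct below are abbreviated and hypothetical, not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// A heavily trimmed manifest template: only the values that vary per cluster
// are parameterized here; the full manifest is the one shown in the log.
const kubeVipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    args: ["manager"]
    env:
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
    - name: address
      value: {{ .VIP }}
  hostNetwork: true
`

type kubeVipParams struct {
	Image, Interface, VIP string
	Port                  int
}

func main() {
	t := template.Must(template.New("kube-vip").Parse(kubeVipTmpl))
	// Values taken from the rendered manifest above.
	p := kubeVipParams{
		Image:     "ghcr.io/kube-vip/kube-vip:v0.8.10",
		Interface: "eth0",
		VIP:       "172.17.95.254",
		Port:      8443,
	}
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}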
	I0407 13:02:51.469254    7088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 13:02:51.485737    7088 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0407 13:02:51.498494    7088 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0407 13:02:51.523972    7088 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl
	I0407 13:02:51.523972    7088 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet
	I0407 13:02:51.523972    7088 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm
	I0407 13:02:52.560396    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
	I0407 13:02:52.571412    7088 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0407 13:02:52.581461    7088 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0407 13:02:52.581985    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0407 13:02:52.724827    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm -> /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0407 13:02:52.746817    7088 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0407 13:02:52.755823    7088 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0407 13:02:52.755823    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
	I0407 13:02:52.909860    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:02:53.001347    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
	I0407 13:02:53.016447    7088 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0407 13:02:53.045051    7088 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0407 13:02:53.045051    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
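Each binary transfer above follows the same pattern: run stat -c "%s %y" on the remote path and only scp the cached binary when the stat fails. A minimal sketch of that remote existence check using golang.org/x/crypto/ssh (the key path is hypothetical, host and file names are modeled on the log, and the copy itself is elided):

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile(`C:\path\to\machines\ha-573100\id_rsa`) // hypothetical key path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "172.17.82.162:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// A missing file makes stat exit non-zero, which surfaces here as an error;
	// that is the signal to transfer the cached binary (transfer elided).
	if _, err := sess.CombinedOutput(`stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet`); err != nil {
		fmt.Println("kubelet missing on the node; would copy the cached binary now")
		return
	}
	fmt.Println("kubelet already present; skipping transfer")
}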
	I0407 13:02:53.808058    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0407 13:02:53.824514    7088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0407 13:02:53.851717    7088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 13:02:53.879850    7088 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0407 13:02:53.920680    7088 ssh_runner.go:195] Run: grep 172.17.95.254	control-plane.minikube.internal$ /etc/hosts
	I0407 13:02:53.929501    7088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:02:53.965723    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:02:54.162292    7088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:02:54.190163    7088 host.go:66] Checking if "ha-573100" exists ...
	I0407 13:02:54.191166    7088 start.go:317] joinCluster: &{Name:ha-573100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-573100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.95.223 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.82.162 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:02:54.191166    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0407 13:02:54.191166    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 13:02:56.282443    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:02:56.282443    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:56.282443    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 13:02:58.868220    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 13:02:58.868991    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:58.869047    7088 sshutil.go:53] new ssh client: &{IP:172.17.95.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\id_rsa Username:docker}
	I0407 13:02:59.329516    7088 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0": (5.1382295s)
	I0407 13:02:59.329577    7088 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.17.82.162 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 13:02:59.329638    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ldyb3b.tts1mdzavw5rgovt --discovery-token-ca-cert-hash sha256:e47514a965ab97a4e144f4fd6f094d10c7d6accee7eb763a802bd4fd6fa9e479 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-573100-m02 --control-plane --apiserver-advertise-address=172.17.82.162 --apiserver-bind-port=8443"
	I0407 13:03:37.630029    7088 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ldyb3b.tts1mdzavw5rgovt --discovery-token-ca-cert-hash sha256:e47514a965ab97a4e144f4fd6f094d10c7d6accee7eb763a802bd4fd6fa9e479 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-573100-m02 --control-plane --apiserver-advertise-address=172.17.82.162 --apiserver-bind-port=8443": (38.3001611s)
	I0407 13:03:37.630029    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0407 13:03:38.410967    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-573100-m02 minikube.k8s.io/updated_at=2025_04_07T13_03_38_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277 minikube.k8s.io/name=ha-573100 minikube.k8s.io/primary=false
	I0407 13:03:38.626288    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-573100-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0407 13:03:38.778205    7088 start.go:319] duration metric: took 44.5868432s to joinCluster
	I0407 13:03:38.778448    7088 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.17.82.162 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 13:03:38.779315    7088 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:03:38.780867    7088 out.go:177] * Verifying Kubernetes components...
	I0407 13:03:38.797710    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:03:39.191251    7088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:03:39.222139    7088 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 13:03:39.222745    7088 kapi.go:59] client config for ha-573100: &rest.Config{Host:"https://172.17.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-573100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-573100\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2be92e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0407 13:03:39.222893    7088 kubeadm.go:483] Overriding stale ClientConfig host https://172.17.95.254:8443 with https://172.17.95.223:8443
	I0407 13:03:39.223770    7088 node_ready.go:35] waiting up to 6m0s for node "ha-573100-m02" to be "Ready" ...
	I0407 13:03:39.224220    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:39.224291    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:39.224291    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:39.224330    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:39.240495    7088 round_trippers.go:581] Response Status: 200 OK in 16 milliseconds
	I0407 13:03:39.724673    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:39.724673    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:39.724673    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:39.724673    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:39.731344    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:03:40.224867    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:40.224867    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:40.224867    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:40.224867    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:40.235146    7088 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0407 13:03:40.724663    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:40.724663    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:40.724663    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:40.724663    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:40.731199    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:03:41.224903    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:41.224903    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:41.224903    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:41.224903    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:41.230455    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:41.230853    7088 node_ready.go:53] node "ha-573100-m02" has status "Ready":"False"
	I0407 13:03:41.724984    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:41.724984    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:41.724984    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:41.724984    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:41.737965    7088 round_trippers.go:581] Response Status: 200 OK in 12 milliseconds
	I0407 13:03:42.224373    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:42.224373    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:42.224373    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:42.224373    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:42.230850    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:03:42.724884    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:42.724884    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:42.724884    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:42.724884    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:42.730697    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:43.224496    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:43.224496    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:43.224496    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:43.224496    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:43.230220    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:43.724567    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:43.724567    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:43.724567    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:43.724567    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:43.840293    7088 round_trippers.go:581] Response Status: 200 OK in 114 milliseconds
	I0407 13:03:43.840293    7088 node_ready.go:53] node "ha-573100-m02" has status "Ready":"False"
	I0407 13:03:44.225308    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:44.225308    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:44.225308    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:44.225308    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:44.231092    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:44.725053    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:44.725053    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:44.725053    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:44.725053    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:44.731159    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:03:45.224589    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:45.224589    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:45.224589    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:45.224589    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:45.234852    7088 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0407 13:03:45.724360    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:45.724360    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:45.724360    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:45.724360    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:45.731312    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:03:46.224857    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:46.224857    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:46.224857    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:46.224857    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:46.231270    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:03:46.231972    7088 node_ready.go:53] node "ha-573100-m02" has status "Ready":"False"
	I0407 13:03:46.724530    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:46.724530    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:46.724530    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:46.724530    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:46.918652    7088 round_trippers.go:581] Response Status: 200 OK in 194 milliseconds
	I0407 13:03:47.224913    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:47.225032    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:47.225032    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:47.225032    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:47.230316    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:47.724558    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:47.724649    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:47.724649    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:47.724649    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:47.730407    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:48.224020    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:48.224020    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:48.224020    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:48.224020    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:48.228313    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:03:48.724215    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:48.724215    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:48.724215    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:48.724215    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:48.729646    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:48.730192    7088 node_ready.go:53] node "ha-573100-m02" has status "Ready":"False"
	I0407 13:03:49.225511    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:49.225670    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:49.225670    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:49.225670    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:49.231240    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:49.724819    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:49.724819    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:49.724819    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:49.724819    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:49.730569    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:50.224903    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:50.224903    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:50.224977    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:50.224977    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:50.228992    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:03:50.724077    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:50.724077    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:50.724077    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:50.724077    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:50.730212    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:03:50.730745    7088 node_ready.go:53] node "ha-573100-m02" has status "Ready":"False"
	I0407 13:03:51.225029    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:51.225029    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:51.225127    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:51.225127    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:51.230396    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:51.725125    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:51.725125    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:51.725125    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:51.725125    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:51.730490    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:52.225558    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:52.225558    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:52.225558    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:52.225558    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:52.230063    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:03:52.724627    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:52.724627    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:52.724627    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:52.724627    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:52.729627    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:53.224547    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:53.224547    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:53.224547    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:53.224547    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:53.230122    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:53.230337    7088 node_ready.go:53] node "ha-573100-m02" has status "Ready":"False"
	I0407 13:03:53.724826    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:53.724826    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:53.724826    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:53.724826    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:53.730025    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:54.224833    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:54.224833    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:54.224833    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:54.224833    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:54.230051    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:54.724198    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:54.724198    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:54.724198    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:54.724198    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:54.729634    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:55.224107    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:55.224107    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:55.224107    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:55.224107    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:55.230028    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:55.230415    7088 node_ready.go:53] node "ha-573100-m02" has status "Ready":"False"
	I0407 13:03:55.725497    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:55.725586    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:55.725586    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:55.725586    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:55.730261    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:03:56.224426    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:56.224426    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:56.224426    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:56.224426    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:56.230215    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:56.725186    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:56.725264    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:56.725291    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:56.725291    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:56.730536    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:57.224545    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:57.224620    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:57.224620    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:57.224620    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:57.229919    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:57.724634    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:57.724634    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:57.724634    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:57.724634    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:57.729921    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:57.729921    7088 node_ready.go:53] node "ha-573100-m02" has status "Ready":"False"
	I0407 13:03:58.224392    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:58.224392    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:58.224392    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:58.224392    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:58.230185    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:58.725099    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:58.725099    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:58.725099    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:58.725099    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:58.729909    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:03:59.224233    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:59.224233    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:59.224233    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:59.224233    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:59.230005    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:59.724599    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:59.724599    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:59.724599    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:59.724599    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:59.730287    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:59.731290    7088 node_ready.go:53] node "ha-573100-m02" has status "Ready":"False"
	I0407 13:04:00.224725    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:00.224725    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:00.224725    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:00.224725    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:00.230502    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:04:00.725482    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:00.725482    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:00.725482    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:00.725482    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:00.731546    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:04:01.224145    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:01.224145    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:01.224145    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:01.224145    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:01.229773    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:04:01.724606    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:01.724606    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:01.724606    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:01.724606    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:01.729180    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:04:02.224906    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:02.224991    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:02.225057    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:02.225057    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:02.229692    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:04:02.230939    7088 node_ready.go:53] node "ha-573100-m02" has status "Ready":"False"
	I0407 13:04:02.724203    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:02.724203    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:02.724203    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:02.724203    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:02.730099    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:04:03.224698    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:03.224698    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:03.224698    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:03.224698    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:03.239038    7088 round_trippers.go:581] Response Status: 200 OK in 14 milliseconds
	I0407 13:04:03.239436    7088 node_ready.go:49] node "ha-573100-m02" has status "Ready":"True"
	I0407 13:04:03.239492    7088 node_ready.go:38] duration metric: took 24.0154683s for node "ha-573100-m02" to be "Ready" ...
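The node_ready wait above polls GET /api/v1/nodes/ha-573100-m02 roughly every 500ms until the node's Ready condition is True. A minimal client-go sketch of the same wait loop (kubeconfig path and timeout are illustrative; this approximates the behavior seen in the log, not minikube's node_ready.go):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitNodeReady polls the node object until its Ready condition is True,
// or until the context expires.
func waitNodeReady(ctx context.Context, cs kubernetes.Interface, name string, interval time.Duration) error {
	for {
		node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		select {
		case <-ctx.Done():
			return fmt.Errorf("node %q did not become Ready: %w", name, ctx.Err())
		case <-time.After(interval):
		}
	}
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", `C:\path\to\kubeconfig`) // hypothetical path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
	defer cancel()
	if err := waitNodeReady(ctx, cs, "ha-573100-m02", 500*time.Millisecond); err != nil {
		panic(err)
	}
	fmt.Println("node Ready")
}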
	I0407 13:04:03.239492    7088 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:04:03.239732    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods
	I0407 13:04:03.239732    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:03.239732    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:03.239795    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:03.252547    7088 round_trippers.go:581] Response Status: 200 OK in 12 milliseconds
	I0407 13:04:03.255541    7088 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-whpg2" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:03.255541    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-whpg2
	I0407 13:04:03.255541    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:03.255541    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:03.255541    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:03.264986    7088 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0407 13:04:03.265331    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:04:03.265331    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:03.265331    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:03.265331    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:03.277287    7088 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0407 13:04:03.277669    7088 pod_ready.go:93] pod "coredns-668d6bf9bc-whpg2" in "kube-system" namespace has status "Ready":"True"
	I0407 13:04:03.277669    7088 pod_ready.go:82] duration metric: took 22.1285ms for pod "coredns-668d6bf9bc-whpg2" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:03.277805    7088 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-z4nkw" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:03.277941    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-z4nkw
	I0407 13:04:03.277941    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:03.277941    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:03.277941    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:03.284688    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:04:03.284810    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:04:03.284810    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:03.284810    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:03.284810    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:03.302456    7088 round_trippers.go:581] Response Status: 200 OK in 17 milliseconds
	I0407 13:04:03.303491    7088 pod_ready.go:93] pod "coredns-668d6bf9bc-z4nkw" in "kube-system" namespace has status "Ready":"True"
	I0407 13:04:03.303563    7088 pod_ready.go:82] duration metric: took 25.7577ms for pod "coredns-668d6bf9bc-z4nkw" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:03.303563    7088 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:03.303693    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/etcd-ha-573100
	I0407 13:04:03.303733    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:03.303733    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:03.303733    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:03.324056    7088 round_trippers.go:581] Response Status: 200 OK in 20 milliseconds
	I0407 13:04:03.324101    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:04:03.324101    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:03.324101    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:03.324101    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:03.327994    7088 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 13:04:03.328764    7088 pod_ready.go:93] pod "etcd-ha-573100" in "kube-system" namespace has status "Ready":"True"
	I0407 13:04:03.329117    7088 pod_ready.go:82] duration metric: took 25.5545ms for pod "etcd-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:03.329259    7088 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:03.329259    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/etcd-ha-573100-m02
	I0407 13:04:03.329413    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:03.329413    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:03.329413    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:03.333484    7088 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 13:04:03.333919    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:03.333971    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:03.333971    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:03.333971    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:03.338283    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:04:03.338837    7088 pod_ready.go:93] pod "etcd-ha-573100-m02" in "kube-system" namespace has status "Ready":"True"
	I0407 13:04:03.339393    7088 pod_ready.go:82] duration metric: took 10.1349ms for pod "etcd-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:03.339393    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:03.425486    7088 request.go:661] Waited for 86.0924ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-573100
	I0407 13:04:03.425486    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-573100
	I0407 13:04:03.425486    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:03.425486    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:03.425486    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:03.430341    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:04:03.624998    7088 request.go:661] Waited for 193.0367ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:04:03.624998    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:04:03.624998    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:03.624998    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:03.624998    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:03.630110    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:04:03.630338    7088 pod_ready.go:93] pod "kube-apiserver-ha-573100" in "kube-system" namespace has status "Ready":"True"
	I0407 13:04:03.630338    7088 pod_ready.go:82] duration metric: took 290.9429ms for pod "kube-apiserver-ha-573100" in "kube-system" namespace to be "Ready" ...
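The "Waited for ... due to client-side throttling, not priority and fairness" lines that start appearing here come from client-go's local request rate limiter, not from the API server: once the client exceeds its configured QPS, further requests are delayed on the client and the wait is logged. A hedged sketch of relaxing that limit on a rest.Config (the numbers below are illustrative, not what minikube configures):

```go
// Raising client-go's client-side rate limit so bursts of GETs are not
// delayed locally. The QPS/Burst values below are illustrative assumptions.
package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	// client-go defaults are QPS=5 and Burst=10; anything beyond that is
	// throttled client-side and produces the "Waited for ..." log lines.
	cfg.QPS = 50
	cfg.Burst = 100

	cs := kubernetes.NewForConfigOrDie(cfg)
	_ = cs
	fmt.Println("clientset created with relaxed client-side throttling")
}
```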
	I0407 13:04:03.630338    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:03.824909    7088 request.go:661] Waited for 194.5703ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-573100-m02
	I0407 13:04:03.825353    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-573100-m02
	I0407 13:04:03.825353    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:03.825353    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:03.825353    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:03.830917    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:04:04.025154    7088 request.go:661] Waited for 193.8529ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:04.025154    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:04.025610    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:04.025653    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:04.025653    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:04.029867    7088 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 13:04:04.030168    7088 pod_ready.go:93] pod "kube-apiserver-ha-573100-m02" in "kube-system" namespace has status "Ready":"True"
	I0407 13:04:04.030168    7088 pod_ready.go:82] duration metric: took 399.8289ms for pod "kube-apiserver-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:04.030168    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:04.225325    7088 request.go:661] Waited for 195.1555ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-573100
	I0407 13:04:04.225884    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-573100
	I0407 13:04:04.225923    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:04.225923    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:04.225985    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:04.230163    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:04:04.425438    7088 request.go:661] Waited for 195.0674ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:04:04.425438    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:04:04.425438    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:04.425438    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:04.425438    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:04.429802    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:04:04.429802    7088 pod_ready.go:93] pod "kube-controller-manager-ha-573100" in "kube-system" namespace has status "Ready":"True"
	I0407 13:04:04.429802    7088 pod_ready.go:82] duration metric: took 399.6318ms for pod "kube-controller-manager-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:04.430340    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:04.625506    7088 request.go:661] Waited for 195.1039ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-573100-m02
	I0407 13:04:04.625506    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-573100-m02
	I0407 13:04:04.625506    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:04.625506    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:04.626069    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:04.630887    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:04:04.825571    7088 request.go:661] Waited for 194.2757ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:04.825864    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:04.826014    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:04.826068    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:04.826068    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:04.832425    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:04:04.832425    7088 pod_ready.go:93] pod "kube-controller-manager-ha-573100-m02" in "kube-system" namespace has status "Ready":"True"
	I0407 13:04:04.832962    7088 pod_ready.go:82] duration metric: took 402.6207ms for pod "kube-controller-manager-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:04.832962    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sxkgm" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:05.025129    7088 request.go:661] Waited for 191.922ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sxkgm
	I0407 13:04:05.025129    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sxkgm
	I0407 13:04:05.025129    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:05.025129    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:05.025129    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:05.030525    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:04:05.224801    7088 request.go:661] Waited for 193.7488ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:05.225252    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:05.225252    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:05.225252    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:05.225252    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:05.229861    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:04:05.229861    7088 pod_ready.go:93] pod "kube-proxy-sxkgm" in "kube-system" namespace has status "Ready":"True"
	I0407 13:04:05.229861    7088 pod_ready.go:82] duration metric: took 396.8974ms for pod "kube-proxy-sxkgm" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:05.229861    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xsgf7" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:05.424981    7088 request.go:661] Waited for 195.1194ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsgf7
	I0407 13:04:05.424981    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsgf7
	I0407 13:04:05.424981    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:05.424981    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:05.424981    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:05.431627    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:04:05.625673    7088 request.go:661] Waited for 193.5397ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:04:05.626176    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:04:05.626176    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:05.626176    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:05.626242    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:05.630567    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:04:05.630567    7088 pod_ready.go:93] pod "kube-proxy-xsgf7" in "kube-system" namespace has status "Ready":"True"
	I0407 13:04:05.630567    7088 pod_ready.go:82] duration metric: took 400.7039ms for pod "kube-proxy-xsgf7" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:05.630567    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:05.824913    7088 request.go:661] Waited for 194.3451ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-573100
	I0407 13:04:05.824913    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-573100
	I0407 13:04:05.824913    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:05.824913    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:05.824913    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:05.829608    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:04:06.025096    7088 request.go:661] Waited for 194.9629ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:04:06.025466    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:04:06.025466    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:06.025466    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:06.025466    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:06.035456    7088 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0407 13:04:06.036010    7088 pod_ready.go:93] pod "kube-scheduler-ha-573100" in "kube-system" namespace has status "Ready":"True"
	I0407 13:04:06.036052    7088 pod_ready.go:82] duration metric: took 405.4837ms for pod "kube-scheduler-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:06.036076    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:06.225413    7088 request.go:661] Waited for 189.336ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-573100-m02
	I0407 13:04:06.225810    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-573100-m02
	I0407 13:04:06.225810    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:06.225810    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:06.225810    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:06.231125    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:04:06.425918    7088 request.go:661] Waited for 194.3474ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:06.425918    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:06.425918    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:06.425918    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:06.425918    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:06.433161    7088 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 13:04:06.434280    7088 pod_ready.go:93] pod "kube-scheduler-ha-573100-m02" in "kube-system" namespace has status "Ready":"True"
	I0407 13:04:06.434280    7088 pod_ready.go:82] duration metric: took 398.202ms for pod "kube-scheduler-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:06.434280    7088 pod_ready.go:39] duration metric: took 3.1946823s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
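pod_ready.go repeats one pattern for every system-critical pod above: fetch the pod, inspect its Ready condition, and re-fetch the node it is scheduled on. A minimal sketch of the pod half of that check, again via a typed client-go clientset (an assumption about equivalence; the log shows minikube issuing the requests directly):

```go
// isPodReady reports whether a pod's Ready condition is True.
// Sketch only; mirrors the check behind the pod_ready.go log lines.
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func isPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) (bool, error) {
	pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	ready, err := isPodReady(context.Background(), cs, "kube-system", "etcd-ha-573100")
	fmt.Println(ready, err)
}
```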
	I0407 13:04:06.434280    7088 api_server.go:52] waiting for apiserver process to appear ...
	I0407 13:04:06.446383    7088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:04:06.472714    7088 api_server.go:72] duration metric: took 27.6941438s to wait for apiserver process to appear ...
	I0407 13:04:06.472714    7088 api_server.go:88] waiting for apiserver healthz status ...
	I0407 13:04:06.472714    7088 api_server.go:253] Checking apiserver healthz at https://172.17.95.223:8443/healthz ...
	I0407 13:04:06.485038    7088 api_server.go:279] https://172.17.95.223:8443/healthz returned 200:
	ok
	I0407 13:04:06.485176    7088 round_trippers.go:470] GET https://172.17.95.223:8443/version
	I0407 13:04:06.485194    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:06.485194    7088 round_trippers.go:480]     Accept: application/json, */*
	I0407 13:04:06.485194    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:06.486947    7088 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0407 13:04:06.486947    7088 api_server.go:141] control plane version: v1.32.2
	I0407 13:04:06.486947    7088 api_server.go:131] duration metric: took 14.2338ms to wait for apiserver health ...
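The api_server health step above is two plain HTTPS probes against the control plane: /healthz, which is considered healthy when it returns 200 with the body "ok", and /version, which yields the control-plane version (v1.32.2 here). A rough sketch of the same probes through the discovery client, so the kubeconfig's TLS credentials are reused; treating this as equivalent to minikube's own check is an assumption.

```go
// Probe the API server's /healthz and /version endpoints, roughly what the
// api_server.go log lines above correspond to. Sketch, not minikube's code.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)

	// /healthz returns the literal body "ok" when the control plane is healthy.
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.Background())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body)

	// /version reports the control-plane version seen in the log above.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("control plane version:", v.GitVersion)
}
```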
	I0407 13:04:06.486947    7088 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 13:04:06.625358    7088 request.go:661] Waited for 137.8702ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods
	I0407 13:04:06.625358    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods
	I0407 13:04:06.625358    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:06.625358    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:06.625358    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:06.631889    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:04:06.634000    7088 system_pods.go:59] 17 kube-system pods found
	I0407 13:04:06.634000    7088 system_pods.go:61] "coredns-668d6bf9bc-whpg2" [48faa3ce-0f1f-4c88-8298-15960d3c75a7] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "coredns-668d6bf9bc-z4nkw" [4aa968e7-d945-4f70-932d-b42417702382] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "etcd-ha-573100" [c473d0ab-e66d-4b41-ad43-edce5e371027] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "etcd-ha-573100-m02" [0f05d56b-d0f5-4505-9d54-127111d30d27] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "kindnet-fxxw5" [4fc9602a-d72f-4421-96a3-a7b0b35e2ce6] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "kindnet-vhm9b" [355feff9-5819-4d85-82f0-2281fdcc5d5a] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "kube-apiserver-ha-573100" [60830754-3b25-4753-9ec0-d9cef7b7b548] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "kube-apiserver-ha-573100-m02" [5fa8bf0c-a2ff-4b0d-8e9f-a42172533517] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "kube-controller-manager-ha-573100" [0c4d6f0d-d4ae-40cd-bfa7-b7f39dff081e] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "kube-controller-manager-ha-573100-m02" [cb31520b-fa77-4ceb-a798-c45f10c87d10] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "kube-proxy-sxkgm" [6e0a6f3f-a949-4b95-aaaa-d74c1a7e0efe] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "kube-proxy-xsgf7" [1bccfdb6-28f7-4190-a5a1-9316cfdf215e] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "kube-scheduler-ha-573100" [d46211dc-ab95-474b-abfc-218808a4d1aa] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "kube-scheduler-ha-573100-m02" [1fd3b48a-ef70-4cce-b7d4-24b44331bfba] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "kube-vip-ha-573100" [b8e24d1a-1309-482f-9734-99bcf4812448] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "kube-vip-ha-573100-m02" [6e3ad003-a31a-49de-841f-2e21e31f094d] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "storage-provisioner" [8d89f971-c575-4089-b12b-823fe7524dc2] Running
	I0407 13:04:06.634000    7088 system_pods.go:74] duration metric: took 147.052ms to wait for pod list to return data ...
	I0407 13:04:06.634000    7088 default_sa.go:34] waiting for default service account to be created ...
	I0407 13:04:06.825527    7088 request.go:661] Waited for 191.5263ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/default/serviceaccounts
	I0407 13:04:06.826009    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/default/serviceaccounts
	I0407 13:04:06.826090    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:06.826090    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:06.826090    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:06.830792    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:04:06.831512    7088 default_sa.go:45] found service account: "default"
	I0407 13:04:06.831546    7088 default_sa.go:55] duration metric: took 197.5448ms for default service account to be created ...
	I0407 13:04:06.831602    7088 system_pods.go:116] waiting for k8s-apps to be running ...
	I0407 13:04:07.025173    7088 request.go:661] Waited for 193.5214ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods
	I0407 13:04:07.025173    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods
	I0407 13:04:07.025639    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:07.025639    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:07.025639    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:07.031498    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:04:07.034304    7088 system_pods.go:86] 17 kube-system pods found
	I0407 13:04:07.034347    7088 system_pods.go:89] "coredns-668d6bf9bc-whpg2" [48faa3ce-0f1f-4c88-8298-15960d3c75a7] Running
	I0407 13:04:07.034394    7088 system_pods.go:89] "coredns-668d6bf9bc-z4nkw" [4aa968e7-d945-4f70-932d-b42417702382] Running
	I0407 13:04:07.034394    7088 system_pods.go:89] "etcd-ha-573100" [c473d0ab-e66d-4b41-ad43-edce5e371027] Running
	I0407 13:04:07.034394    7088 system_pods.go:89] "etcd-ha-573100-m02" [0f05d56b-d0f5-4505-9d54-127111d30d27] Running
	I0407 13:04:07.034394    7088 system_pods.go:89] "kindnet-fxxw5" [4fc9602a-d72f-4421-96a3-a7b0b35e2ce6] Running
	I0407 13:04:07.034394    7088 system_pods.go:89] "kindnet-vhm9b" [355feff9-5819-4d85-82f0-2281fdcc5d5a] Running
	I0407 13:04:07.034394    7088 system_pods.go:89] "kube-apiserver-ha-573100" [60830754-3b25-4753-9ec0-d9cef7b7b548] Running
	I0407 13:04:07.034394    7088 system_pods.go:89] "kube-apiserver-ha-573100-m02" [5fa8bf0c-a2ff-4b0d-8e9f-a42172533517] Running
	I0407 13:04:07.034394    7088 system_pods.go:89] "kube-controller-manager-ha-573100" [0c4d6f0d-d4ae-40cd-bfa7-b7f39dff081e] Running
	I0407 13:04:07.034586    7088 system_pods.go:89] "kube-controller-manager-ha-573100-m02" [cb31520b-fa77-4ceb-a798-c45f10c87d10] Running
	I0407 13:04:07.034586    7088 system_pods.go:89] "kube-proxy-sxkgm" [6e0a6f3f-a949-4b95-aaaa-d74c1a7e0efe] Running
	I0407 13:04:07.034586    7088 system_pods.go:89] "kube-proxy-xsgf7" [1bccfdb6-28f7-4190-a5a1-9316cfdf215e] Running
	I0407 13:04:07.034586    7088 system_pods.go:89] "kube-scheduler-ha-573100" [d46211dc-ab95-474b-abfc-218808a4d1aa] Running
	I0407 13:04:07.034586    7088 system_pods.go:89] "kube-scheduler-ha-573100-m02" [1fd3b48a-ef70-4cce-b7d4-24b44331bfba] Running
	I0407 13:04:07.034586    7088 system_pods.go:89] "kube-vip-ha-573100" [b8e24d1a-1309-482f-9734-99bcf4812448] Running
	I0407 13:04:07.034586    7088 system_pods.go:89] "kube-vip-ha-573100-m02" [6e3ad003-a31a-49de-841f-2e21e31f094d] Running
	I0407 13:04:07.034586    7088 system_pods.go:89] "storage-provisioner" [8d89f971-c575-4089-b12b-823fe7524dc2] Running
	I0407 13:04:07.034586    7088 system_pods.go:126] duration metric: took 202.9824ms to wait for k8s-apps to be running ...
	I0407 13:04:07.034712    7088 system_svc.go:44] waiting for kubelet service to be running ....
	I0407 13:04:07.046776    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:04:07.069495    7088 system_svc.go:56] duration metric: took 34.9092ms WaitForService to wait for kubelet
	I0407 13:04:07.069495    7088 kubeadm.go:582] duration metric: took 28.2909227s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
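The kubelet check just above is not an API call at all: minikube shells into the node and runs `sudo systemctl is-active --quiet service kubelet`, treating exit status 0 as "running". A hedged sketch of running such a remote command with golang.org/x/crypto/ssh follows; the host address, user and key path are placeholders, not values taken from this run beyond what the log shows elsewhere.

```go
// Run a remote command over SSH and report whether it exited 0, similar in
// spirit to minikube's ssh_runner. Host, user and key path are placeholders.
package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	key, err := os.ReadFile("id_rsa") // assumed private key path
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "172.17.94.27:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	sess, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Exit status 0 means the unit is active; a non-zero status surfaces as an error.
	if err := sess.Run("sudo systemctl is-active --quiet service kubelet"); err != nil {
		fmt.Println("kubelet not active:", err)
		return
	}
	fmt.Println("kubelet service is running")
}
```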
	I0407 13:04:07.070506    7088 node_conditions.go:102] verifying NodePressure condition ...
	I0407 13:04:07.225437    7088 request.go:661] Waited for 154.931ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes
	I0407 13:04:07.225437    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes
	I0407 13:04:07.225437    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:07.225437    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:07.225437    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:07.231866    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:04:07.232462    7088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 13:04:07.232515    7088 node_conditions.go:123] node cpu capacity is 2
	I0407 13:04:07.232594    7088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 13:04:07.232594    7088 node_conditions.go:123] node cpu capacity is 2
	I0407 13:04:07.232594    7088 node_conditions.go:105] duration metric: took 162.0875ms to run NodePressure ...
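The NodePressure step reads each node's reported capacity; the "17734596Ki" ephemeral-storage figure and the CPU count of 2 are Kubernetes resource.Quantity strings taken from node.Status.Capacity. A small sketch of parsing those quantity strings with the apimachinery resource package (values copied from the log; the conversion shown is just standard Quantity handling):

```go
// Parse the Kubernetes quantity strings reported in node capacity, e.g. the
// "17734596Ki" ephemeral storage and "2" CPUs in the log above.
package main

import (
	"fmt"

	"k8s.io/apimachinery/pkg/api/resource"
)

func main() {
	storage := resource.MustParse("17734596Ki")
	cpu := resource.MustParse("2")

	// Value() gives the canonical integer value: bytes for storage, cores for CPU.
	fmt.Printf("ephemeral storage: %d bytes (~%.1f GiB)\n",
		storage.Value(), float64(storage.Value())/(1<<30))
	fmt.Printf("cpu capacity: %d cores\n", cpu.Value())
}
```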
	I0407 13:04:07.232594    7088 start.go:241] waiting for startup goroutines ...
	I0407 13:04:07.232594    7088 start.go:255] writing updated cluster config ...
	I0407 13:04:07.238145    7088 out.go:201] 
	I0407 13:04:07.258378    7088 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:04:07.258600    7088 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\config.json ...
	I0407 13:04:07.263992    7088 out.go:177] * Starting "ha-573100-m03" control-plane node in "ha-573100" cluster
	I0407 13:04:07.267324    7088 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 13:04:07.267324    7088 cache.go:56] Caching tarball of preloaded images
	I0407 13:04:07.268372    7088 preload.go:172] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0407 13:04:07.268372    7088 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0407 13:04:07.268372    7088 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\config.json ...
	I0407 13:04:07.276648    7088 start.go:360] acquireMachinesLock for ha-573100-m03: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 13:04:07.276648    7088 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-573100-m03"
	I0407 13:04:07.276648    7088 start.go:93] Provisioning new machine with config: &{Name:ha-573100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName
:ha-573100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.95.223 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.82.162 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false ins
pektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 13:04:07.276648    7088 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0407 13:04:07.282944    7088 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0407 13:04:07.282944    7088 start.go:159] libmachine.API.Create for "ha-573100" (driver="hyperv")
	I0407 13:04:07.283589    7088 client.go:168] LocalClient.Create starting
	I0407 13:04:07.283822    7088 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0407 13:04:07.284488    7088 main.go:141] libmachine: Decoding PEM data...
	I0407 13:04:07.284488    7088 main.go:141] libmachine: Parsing certificate...
	I0407 13:04:07.284787    7088 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0407 13:04:07.284972    7088 main.go:141] libmachine: Decoding PEM data...
	I0407 13:04:07.284972    7088 main.go:141] libmachine: Parsing certificate...
	I0407 13:04:07.284972    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0407 13:04:09.160636    7088 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0407 13:04:09.160866    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:09.160866    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0407 13:04:10.847787    7088 main.go:141] libmachine: [stdout =====>] : False
	
	I0407 13:04:10.847787    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:10.848058    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0407 13:04:12.338268    7088 main.go:141] libmachine: [stdout =====>] : True
	
	I0407 13:04:12.338268    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:12.338603    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0407 13:04:16.045163    7088 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0407 13:04:16.045163    7088 main.go:141] libmachine: [stderr =====>] : 
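Every Hyper-V operation in this log is an invocation of powershell.exe with -NoProfile -NonInteractive, and queries such as Get-VMSwitch are wrapped in ConvertTo-Json so the Go side can parse stdout. A minimal sketch of that pattern (Windows only; the struct fields follow the JSON shown above, and error handling is simplified):

```go
// Query Hyper-V switches by shelling out to PowerShell and parsing the JSON
// it prints, mirroring the Get-VMSwitch calls in the log. Windows only.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int
}

func main() {
	ps := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ` +
		`ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
	if err != nil {
		panic(err)
	}
	var switches []vmSwitch
	if err := json.Unmarshal(out, &switches); err != nil {
		panic(err)
	}
	for _, s := range switches {
		fmt.Printf("switch %q (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
	}
}
```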
	I0407 13:04:16.047527    7088 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0407 13:04:16.543823    7088 main.go:141] libmachine: Creating SSH key...
	I0407 13:04:17.031475    7088 main.go:141] libmachine: Creating VM...
	I0407 13:04:17.032557    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0407 13:04:19.904079    7088 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0407 13:04:19.904204    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:19.904388    7088 main.go:141] libmachine: Using switch "Default Switch"
	I0407 13:04:19.904449    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0407 13:04:21.661704    7088 main.go:141] libmachine: [stdout =====>] : True
	
	I0407 13:04:21.661704    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:21.662184    7088 main.go:141] libmachine: Creating VHD
	I0407 13:04:21.662184    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0407 13:04:25.491148    7088 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 1AF31488-5A7D-43FF-A7AF-C656F6973173
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0407 13:04:25.491900    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:25.491900    7088 main.go:141] libmachine: Writing magic tar header
	I0407 13:04:25.491900    7088 main.go:141] libmachine: Writing SSH key tar header
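The "Writing magic tar header" / "Writing SSH key tar header" lines reflect the docker-machine-style trick of packing the freshly generated SSH public key into a small tar stream written at the start of the raw disk image, which the boot2docker-style guest can extract on first boot. A hedged sketch of writing such a tar stream into a disk file with archive/tar; the file names and layout are assumptions, not the exact format minikube uses.

```go
// Write a small tar stream containing an SSH public key into the start of a
// raw disk image. Sketch of the "magic tar header" idea; the exact paths and
// layout expected by the guest are assumptions here.
package main

import (
	"archive/tar"
	"os"
)

func main() {
	pubKey, err := os.ReadFile("id_rsa.pub") // assumed key path
	if err != nil {
		panic(err)
	}

	disk, err := os.OpenFile("disk.raw", os.O_WRONLY|os.O_CREATE, 0o644)
	if err != nil {
		panic(err)
	}
	defer disk.Close()

	tw := tar.NewWriter(disk)
	files := []struct {
		name string
		mode int64
		body []byte
	}{
		{".ssh/", 0o700, nil},                   // directory entry
		{".ssh/authorized_keys", 0o600, pubKey}, // the key the VM will trust
	}
	for _, f := range files {
		hdr := &tar.Header{Name: f.name, Mode: f.mode, Size: int64(len(f.body))}
		if f.body == nil {
			hdr.Typeflag = tar.TypeDir
		}
		if err := tw.WriteHeader(hdr); err != nil {
			panic(err)
		}
		if len(f.body) > 0 {
			if _, err := tw.Write(f.body); err != nil {
				panic(err)
			}
		}
	}
	if err := tw.Close(); err != nil {
		panic(err)
	}
}
```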
	I0407 13:04:25.505275    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0407 13:04:28.727771    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:04:28.727771    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:28.728475    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m03\disk.vhd' -SizeBytes 20000MB
	I0407 13:04:31.318240    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:04:31.318698    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:31.318698    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-573100-m03 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0407 13:04:35.051272    7088 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-573100-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0407 13:04:35.051272    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:35.052145    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-573100-m03 -DynamicMemoryEnabled $false
	I0407 13:04:37.359544    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:04:37.359720    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:37.359720    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-573100-m03 -Count 2
	I0407 13:04:39.640505    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:04:39.640612    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:39.640612    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-573100-m03 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m03\boot2docker.iso'
	I0407 13:04:42.267283    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:04:42.267283    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:42.267283    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-573100-m03 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m03\disk.vhd'
	I0407 13:04:44.963564    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:04:44.964304    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:44.964304    7088 main.go:141] libmachine: Starting VM...
	I0407 13:04:44.964368    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-573100-m03
	I0407 13:04:48.147527    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:04:48.147527    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:48.147527    7088 main.go:141] libmachine: Waiting for host to start...
	I0407 13:04:48.147621    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:04:50.469306    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:04:50.469306    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:50.469736    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:04:53.004289    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:04:53.004419    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:54.004847    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:04:56.282109    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:04:56.282199    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:56.282199    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:04:58.842581    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:04:58.842581    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:59.842758    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:05:02.101253    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:05:02.101802    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:02.101802    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:05:04.622860    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:05:04.623874    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:05.625093    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:05:07.839447    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:05:07.840154    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:07.840154    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:05:10.439212    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:05:10.440169    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:11.441232    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:05:13.706866    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:05:13.707743    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:13.707743    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:05:16.368666    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:05:16.368666    7088 main.go:141] libmachine: [stderr =====>] : 
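"Waiting for host to start..." above is a poll loop: the driver repeatedly re-checks ( Get-VM ).state and then asks for networkadapters[0].ipaddresses[0], sleeping between attempts, and only proceeds once a non-empty address comes back (172.17.94.27 here, after roughly 28 seconds of empty answers). A compressed sketch of that loop; the timeout and sleep interval are assumptions.

```go
// Poll Hyper-V until a VM reports its first IP address, mirroring the
// "Waiting for host to start..." loop in the log. Windows only; the timeout
// and interval below are assumptions.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func psOutput(cmd string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).Output()
	return strings.TrimSpace(string(out)), err
}

func waitForIP(vm string, timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		state, err := psOutput(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
		if err != nil || state != "Running" {
			time.Sleep(time.Second)
			continue
		}
		ip, err := psOutput(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
		if err == nil && ip != "" {
			return ip, nil
		}
		time.Sleep(time.Second)
	}
	return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
}

func main() {
	ip, err := waitForIP("ha-573100-m03", 5*time.Minute)
	if err != nil {
		panic(err)
	}
	fmt.Println("VM IP:", ip)
}
```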
	I0407 13:05:16.368666    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:05:18.488237    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:05:18.488237    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:18.488237    7088 machine.go:93] provisionDockerMachine start ...
	I0407 13:05:18.488237    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:05:20.654388    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:05:20.654456    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:20.654456    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:05:23.210635    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:05:23.211288    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:23.219661    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:05:23.236391    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.94.27 22 <nil> <nil>}
	I0407 13:05:23.236391    7088 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 13:05:23.373662    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0407 13:05:23.373662    7088 buildroot.go:166] provisioning hostname "ha-573100-m03"
	I0407 13:05:23.373662    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:05:25.511025    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:05:25.511666    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:25.511666    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:05:28.070653    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:05:28.070653    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:28.077488    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:05:28.078079    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.94.27 22 <nil> <nil>}
	I0407 13:05:28.078160    7088 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-573100-m03 && echo "ha-573100-m03" | sudo tee /etc/hostname
	I0407 13:05:28.254477    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-573100-m03
	
	I0407 13:05:28.254477    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:05:30.469936    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:05:30.470295    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:30.470295    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:05:33.060519    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:05:33.060519    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:33.067199    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:05:33.067259    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.94.27 22 <nil> <nil>}
	I0407 13:05:33.067259    7088 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-573100-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-573100-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-573100-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 13:05:33.225986    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
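The embedded shell script above is the idempotent hostname fix-up: if no /etc/hosts entry already ends in ha-573100-m03, it either rewrites an existing 127.0.1.1 line or appends one. On the Go side this is just a formatted command string handed to the SSH runner; a hedged sketch of building it (string construction only, execution would go through an SSH session as in the earlier sketch):

```go
// Build the idempotent /etc/hosts fix-up command shown in the log for a given
// hostname. Sketch of the string construction only.
package main

import "fmt"

func etcHostsCmd(hostname string) string {
	return fmt.Sprintf(`
		if ! grep -xq '.*\s%[1]s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts;
			else
				echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts;
			fi
		fi`, hostname)
}

func main() {
	fmt.Println(etcHostsCmd("ha-573100-m03"))
}
```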
	I0407 13:05:33.225986    7088 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0407 13:05:33.225986    7088 buildroot.go:174] setting up certificates
	I0407 13:05:33.226515    7088 provision.go:84] configureAuth start
	I0407 13:05:33.226616    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:05:35.396565    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:05:35.397012    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:35.397012    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:05:37.989220    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:05:37.989220    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:37.989452    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:05:40.159704    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:05:40.159802    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:40.159865    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:05:42.743428    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:05:42.743945    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:42.744021    7088 provision.go:143] copyHostCerts
	I0407 13:05:42.744021    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0407 13:05:42.744021    7088 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0407 13:05:42.744021    7088 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0407 13:05:42.744782    7088 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0407 13:05:42.745467    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0407 13:05:42.746245    7088 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0407 13:05:42.746290    7088 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0407 13:05:42.746290    7088 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0407 13:05:42.747574    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0407 13:05:42.747574    7088 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0407 13:05:42.747574    7088 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0407 13:05:42.747574    7088 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0407 13:05:42.749383    7088 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-573100-m03 san=[127.0.0.1 172.17.94.27 ha-573100-m03 localhost minikube]
	I0407 13:05:42.859521    7088 provision.go:177] copyRemoteCerts
	I0407 13:05:42.869470    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 13:05:42.869470    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:05:45.016319    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:05:45.016319    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:45.016401    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:05:47.571174    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:05:47.572152    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:47.572395    7088 sshutil.go:53] new ssh client: &{IP:172.17.94.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m03\id_rsa Username:docker}
	I0407 13:05:47.677174    7088 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8076259s)
	I0407 13:05:47.677230    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0407 13:05:47.677524    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 13:05:47.724728    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0407 13:05:47.725205    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0407 13:05:47.768523    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0407 13:05:47.769032    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0407 13:05:47.818533    7088 provision.go:87] duration metric: took 14.5919517s to configureAuth
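configureAuth generated a per-node server certificate whose SANs cover the VM's address and names (san=[127.0.0.1 172.17.94.27 ha-573100-m03 localhost minikube] above). A self-contained sketch of issuing such a SAN-bearing certificate with crypto/x509; a throwaway CA and short validity are used so the example runs on its own, and none of this is minikube's exact cert code:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

func main() {
	// Throwaway CA standing in for the ca.pem / ca-key.pem pair referenced above.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"example CA"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate carrying the SANs seen in the provision.go line above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-573100-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.94.27")},
		DNSNames:     []string{"ha-573100-m03", "localhost", "minikube"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	fmt.Print(string(pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})))
}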
	I0407 13:05:47.818593    7088 buildroot.go:189] setting minikube options for container-runtime
	I0407 13:05:47.819211    7088 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:05:47.819379    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:05:50.018080    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:05:50.018588    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:50.018588    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:05:52.559397    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:05:52.559681    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:52.564228    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:05:52.564881    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.94.27 22 <nil> <nil>}
	I0407 13:05:52.564881    7088 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0407 13:05:52.691946    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0407 13:05:52.692084    7088 buildroot.go:70] root file system type: tmpfs
	I0407 13:05:52.692251    7088 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0407 13:05:52.692340    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:05:54.831347    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:05:54.832407    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:54.832463    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:05:57.390838    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:05:57.391081    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:57.396323    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:05:57.396922    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.94.27 22 <nil> <nil>}
	I0407 13:05:57.396922    7088 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.95.223"
	Environment="NO_PROXY=172.17.95.223,172.17.82.162"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0407 13:05:57.545430    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.95.223
	Environment=NO_PROXY=172.17.95.223,172.17.82.162
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0407 13:05:57.545430    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:05:59.684717    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:05:59.684717    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:59.685168    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:06:02.302488    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:06:02.302488    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:02.309086    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:06:02.309698    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.94.27 22 <nil> <nil>}
	I0407 13:06:02.309698    7088 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0407 13:06:04.544559    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0407 13:06:04.544559    7088 machine.go:96] duration metric: took 46.0561148s to provisionDockerMachine
	I0407 13:06:04.544559    7088 client.go:171] duration metric: took 1m57.2604433s to LocalClient.Create
	I0407 13:06:04.544559    7088 start.go:167] duration metric: took 1m57.2610887s to libmachine.API.Create "ha-573100"
	I0407 13:06:04.544559    7088 start.go:293] postStartSetup for "ha-573100-m03" (driver="hyperv")
	I0407 13:06:04.544853    7088 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 13:06:04.556145    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 13:06:04.556145    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:06:06.714337    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:06:06.714337    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:06.714581    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:06:09.257215    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:06:09.257215    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:09.257215    7088 sshutil.go:53] new ssh client: &{IP:172.17.94.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m03\id_rsa Username:docker}
	I0407 13:06:09.369817    7088 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8136498s)
	I0407 13:06:09.380938    7088 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 13:06:09.388496    7088 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 13:06:09.388496    7088 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0407 13:06:09.389220    7088 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0407 13:06:09.390182    7088 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> 77282.pem in /etc/ssl/certs
	I0407 13:06:09.390182    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> /etc/ssl/certs/77282.pem
	I0407 13:06:09.401303    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 13:06:09.418575    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem --> /etc/ssl/certs/77282.pem (1708 bytes)
	I0407 13:06:09.464779    7088 start.go:296] duration metric: took 4.9199041s for postStartSetup
	I0407 13:06:09.468301    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:06:11.627432    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:06:11.627752    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:11.627752    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:06:14.181255    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:06:14.181255    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:14.181893    7088 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\config.json ...
	I0407 13:06:14.184106    7088 start.go:128] duration metric: took 2m6.9068881s to createHost
	I0407 13:06:14.184106    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:06:16.417020    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:06:16.417020    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:16.417613    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:06:18.982886    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:06:18.982886    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:18.988627    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:06:18.989402    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.94.27 22 <nil> <nil>}
	I0407 13:06:18.989402    7088 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 13:06:19.122806    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744031179.138002811
	
	I0407 13:06:19.122806    7088 fix.go:216] guest clock: 1744031179.138002811
	I0407 13:06:19.122806    7088 fix.go:229] Guest: 2025-04-07 13:06:19.138002811 +0000 UTC Remote: 2025-04-07 13:06:14.1841065 +0000 UTC m=+557.623865201 (delta=4.953896311s)
	I0407 13:06:19.122806    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:06:21.273857    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:06:21.273857    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:21.273857    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:06:23.840987    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:06:23.840987    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:23.846844    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:06:23.847531    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.94.27 22 <nil> <nil>}
	I0407 13:06:23.847601    7088 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744031179
	I0407 13:06:23.994379    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  7 13:06:19 UTC 2025
	
	I0407 13:06:23.994379    7088 fix.go:236] clock set: Mon Apr  7 13:06:19 UTC 2025
	 (err=<nil>)
	I0407 13:06:23.994379    7088 start.go:83] releasing machines lock for "ha-573100-m03", held for 2m16.7171165s
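The fix.go lines above read the guest clock over SSH (date +%s.%N), compare it against the timestamp recorded on the host side (delta=4.953896311s here), and then set the guest clock with sudo date -s @<epoch>. A small sketch of the comparison step, assuming the remote output has already been captured; the 2-second threshold is arbitrary and not minikube's:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts "date +%s.%N" output (nanoseconds zero-padded to nine digits) into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1744031179.138002811\n") // value from the log above
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	fmt.Printf("guest=%s delta=%s\n", guest.UTC(), delta)
	if delta > 2*time.Second || delta < -2*time.Second {
		fmt.Printf("would run over SSH: sudo date -s @%d\n", guest.Unix())
	}
}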
	I0407 13:06:23.994379    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:06:26.166458    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:06:26.167520    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:26.167552    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:06:28.758265    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:06:28.758265    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:28.761954    7088 out.go:177] * Found network options:
	I0407 13:06:28.765600    7088 out.go:177]   - NO_PROXY=172.17.95.223,172.17.82.162
	W0407 13:06:28.768383    7088 proxy.go:119] fail to check proxy env: Error ip not in block
	W0407 13:06:28.768383    7088 proxy.go:119] fail to check proxy env: Error ip not in block
	I0407 13:06:28.770330    7088 out.go:177]   - NO_PROXY=172.17.95.223,172.17.82.162
	W0407 13:06:28.774296    7088 proxy.go:119] fail to check proxy env: Error ip not in block
	W0407 13:06:28.774296    7088 proxy.go:119] fail to check proxy env: Error ip not in block
	W0407 13:06:28.775907    7088 proxy.go:119] fail to check proxy env: Error ip not in block
	W0407 13:06:28.776090    7088 proxy.go:119] fail to check proxy env: Error ip not in block
	I0407 13:06:28.778094    7088 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0407 13:06:28.778619    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:06:28.791182    7088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0407 13:06:28.791182    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:06:31.060425    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:06:31.061355    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:31.061355    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:06:31.081548    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:06:31.081548    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:31.082206    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:06:33.884512    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:06:33.884774    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:33.884928    7088 sshutil.go:53] new ssh client: &{IP:172.17.94.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m03\id_rsa Username:docker}
	I0407 13:06:33.910632    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:06:33.910968    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:33.911178    7088 sshutil.go:53] new ssh client: &{IP:172.17.94.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m03\id_rsa Username:docker}
	I0407 13:06:33.975176    7088 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1969433s)
	W0407 13:06:33.975286    7088 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0407 13:06:34.010779    7088 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2194683s)
	W0407 13:06:34.010779    7088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 13:06:34.022814    7088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 13:06:34.062056    7088 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 13:06:34.062135    7088 start.go:495] detecting cgroup driver to use...
	I0407 13:06:34.062371    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0407 13:06:34.072679    7088 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0407 13:06:34.072679    7088 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0407 13:06:34.114289    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0407 13:06:34.146301    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0407 13:06:34.166314    7088 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0407 13:06:34.176820    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0407 13:06:34.210413    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 13:06:34.241373    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0407 13:06:34.271361    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 13:06:34.307544    7088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 13:06:34.337585    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0407 13:06:34.373277    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0407 13:06:34.407770    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0407 13:06:34.440791    7088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 13:06:34.458779    7088 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 13:06:34.469773    7088 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 13:06:34.514226    7088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
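sysctl cannot see net.bridge.bridge-nf-call-iptables until the br_netfilter module is loaded, so the runner falls back to modprobe and then enables IPv4 forwarding by writing directly into procfs. A one-line Go equivalent of that final echo (must run as root inside the guest; illustrative only):

package main

import "os"

func main() {
	// Same effect as: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
		panic(err)
	}
}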
	I0407 13:06:34.543526    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:06:34.744631    7088 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0407 13:06:34.776921    7088 start.go:495] detecting cgroup driver to use...
	I0407 13:06:34.788915    7088 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0407 13:06:34.822330    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 13:06:34.856625    7088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 13:06:34.899766    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 13:06:34.935659    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 13:06:34.971095    7088 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0407 13:06:35.040477    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 13:06:35.066651    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:06:35.111501    7088 ssh_runner.go:195] Run: which cri-dockerd
	I0407 13:06:35.128878    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0407 13:06:35.145443    7088 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0407 13:06:35.192232    7088 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0407 13:06:35.399590    7088 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0407 13:06:35.595188    7088 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0407 13:06:35.595295    7088 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0407 13:06:35.639760    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:06:35.828328    7088 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0407 13:06:38.443523    7088 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6150678s)
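The 130-byte /etc/docker/daemon.json pushed a few lines earlier is what switches dockerd to the "cgroupfs" driver noted at docker.go:574; its exact contents are not printed in this log. A hedged sketch of generating that kind of payload; native.cgroupdriver is a standard dockerd exec-opt, but the remaining fields are illustrative guesses rather than the bytes minikube ships:

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	cfg := map[string]interface{}{
		"exec-opts":  []string{"native.cgroupdriver=cgroupfs"},
		"log-driver": "json-file",
		"log-opts":   map[string]string{"max-size": "100m"},
	}
	b, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		panic(err)
	}
	// The rendered JSON would be copied to /etc/docker/daemon.json, followed by
	// systemctl daemon-reload && systemctl restart docker, as in the log above.
	fmt.Println(string(b))
}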
	I0407 13:06:38.455396    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0407 13:06:38.489967    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 13:06:38.536741    7088 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0407 13:06:38.724195    7088 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0407 13:06:38.906819    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:06:39.091868    7088 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0407 13:06:39.133123    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 13:06:39.172505    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:06:39.369060    7088 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0407 13:06:39.477345    7088 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0407 13:06:39.490031    7088 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0407 13:06:39.498975    7088 start.go:563] Will wait 60s for crictl version
	I0407 13:06:39.511798    7088 ssh_runner.go:195] Run: which crictl
	I0407 13:06:39.529962    7088 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 13:06:39.586030    7088 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0407 13:06:39.596402    7088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 13:06:39.637691    7088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 13:06:39.675255    7088 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0407 13:06:39.678475    7088 out.go:177]   - env NO_PROXY=172.17.95.223
	I0407 13:06:39.680944    7088 out.go:177]   - env NO_PROXY=172.17.95.223,172.17.82.162
	I0407 13:06:39.684119    7088 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0407 13:06:39.689610    7088 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0407 13:06:39.689610    7088 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0407 13:06:39.689610    7088 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0407 13:06:39.689610    7088 ip.go:211] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:71:e6:e3 Flags:up|broadcast|multicast|running}
	I0407 13:06:39.693526    7088 ip.go:214] interface addr: fe80::8b22:fcbb:f73:c9e5/64
	I0407 13:06:39.693526    7088 ip.go:214] interface addr: 172.17.80.1/20
	I0407 13:06:39.708269    7088 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0407 13:06:39.713938    7088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
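The pipeline above rewrites /etc/hosts idempotently: strip any existing host.minikube.internal line, append the current Default Switch address (172.17.80.1), and copy the result back into place. The same upsert logic, sketched in Go on an in-memory hosts file (illustrative; the real change is applied remotely via the bash one-liner):

package main

import (
	"fmt"
	"strings"
)

// upsertHost drops any existing entry for name and appends "ip<TAB>name".
func upsertHost(hosts, ip, name string) string {
	var keep []string
	for _, line := range strings.Split(hosts, "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue
		}
		keep = append(keep, line)
	}
	keep = append(keep, ip+"\t"+name)
	return strings.Join(keep, "\n") + "\n"
}

func main() {
	const existing = "127.0.0.1\tlocalhost\n127.0.1.1\tha-573100-m03\n"
	fmt.Print(upsertHost(existing, "172.17.80.1", "host.minikube.internal"))
}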
	I0407 13:06:39.738996    7088 mustload.go:65] Loading cluster: ha-573100
	I0407 13:06:39.739841    7088 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:06:39.740600    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 13:06:41.904483    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:06:41.904830    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:41.904830    7088 host.go:66] Checking if "ha-573100" exists ...
	I0407 13:06:41.905516    7088 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100 for IP: 172.17.94.27
	I0407 13:06:41.905516    7088 certs.go:194] generating shared ca certs ...
	I0407 13:06:41.905574    7088 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:06:41.906311    7088 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0407 13:06:41.906620    7088 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0407 13:06:41.906964    7088 certs.go:256] generating profile certs ...
	I0407 13:06:41.907511    7088 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\client.key
	I0407 13:06:41.907687    7088 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key.034140ef
	I0407 13:06:41.907732    7088 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt.034140ef with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.95.223 172.17.82.162 172.17.94.27 172.17.95.254]
	I0407 13:06:42.163160    7088 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt.034140ef ...
	I0407 13:06:42.163160    7088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt.034140ef: {Name:mkcb32ba08db63a84c65679bc81879233c0f3f5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:06:42.164281    7088 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key.034140ef ...
	I0407 13:06:42.164281    7088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key.034140ef: {Name:mk5617a33f3125826c920bd0ef10e498536f2e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:06:42.165282    7088 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt.034140ef -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt
	I0407 13:06:42.182405    7088 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key.034140ef -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key
	I0407 13:06:42.184004    7088 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.key
	I0407 13:06:42.184004    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0407 13:06:42.184078    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0407 13:06:42.184078    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0407 13:06:42.184078    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0407 13:06:42.184078    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0407 13:06:42.184657    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0407 13:06:42.185234    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0407 13:06:42.185525    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0407 13:06:42.186038    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem (1338 bytes)
	W0407 13:06:42.186346    7088 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728_empty.pem, impossibly tiny 0 bytes
	I0407 13:06:42.186423    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0407 13:06:42.186654    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0407 13:06:42.186863    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0407 13:06:42.186863    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0407 13:06:42.187556    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem (1708 bytes)
	I0407 13:06:42.187556    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:06:42.187556    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem -> /usr/share/ca-certificates/7728.pem
	I0407 13:06:42.187556    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> /usr/share/ca-certificates/77282.pem
	I0407 13:06:42.188236    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 13:06:44.416794    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:06:44.417138    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:44.417138    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 13:06:46.969951    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 13:06:46.970682    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:46.970862    7088 sshutil.go:53] new ssh client: &{IP:172.17.95.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\id_rsa Username:docker}
	I0407 13:06:47.070247    7088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0407 13:06:47.077903    7088 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0407 13:06:47.114307    7088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0407 13:06:47.121213    7088 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0407 13:06:47.151795    7088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0407 13:06:47.159244    7088 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0407 13:06:47.197401    7088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0407 13:06:47.208344    7088 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0407 13:06:47.236743    7088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0407 13:06:47.244538    7088 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0407 13:06:47.272970    7088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0407 13:06:47.283740    7088 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0407 13:06:47.303796    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 13:06:47.349171    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 13:06:47.391586    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 13:06:47.434809    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0407 13:06:47.480017    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0407 13:06:47.524566    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 13:06:47.569494    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 13:06:47.614761    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 13:06:47.663490    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 13:06:47.712791    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem --> /usr/share/ca-certificates/7728.pem (1338 bytes)
	I0407 13:06:47.755113    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem --> /usr/share/ca-certificates/77282.pem (1708 bytes)
	I0407 13:06:47.798492    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0407 13:06:47.830431    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0407 13:06:47.861393    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0407 13:06:47.892099    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0407 13:06:47.922922    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0407 13:06:47.954629    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0407 13:06:47.987178    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0407 13:06:48.030307    7088 ssh_runner.go:195] Run: openssl version
	I0407 13:06:48.056770    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77282.pem && ln -fs /usr/share/ca-certificates/77282.pem /etc/ssl/certs/77282.pem"
	I0407 13:06:48.093086    7088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77282.pem
	I0407 13:06:48.102031    7088 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:38 /usr/share/ca-certificates/77282.pem
	I0407 13:06:48.112870    7088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77282.pem
	I0407 13:06:48.133417    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77282.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 13:06:48.163296    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 13:06:48.195562    7088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:06:48.202950    7088 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:06:48.213785    7088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:06:48.234478    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 13:06:48.264378    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7728.pem && ln -fs /usr/share/ca-certificates/7728.pem /etc/ssl/certs/7728.pem"
	I0407 13:06:48.292649    7088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7728.pem
	I0407 13:06:48.300010    7088 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:38 /usr/share/ca-certificates/7728.pem
	I0407 13:06:48.310632    7088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7728.pem
	I0407 13:06:48.330556    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7728.pem /etc/ssl/certs/51391683.0"
	I0407 13:06:48.360043    7088 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 13:06:48.366461    7088 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0407 13:06:48.366713    7088 kubeadm.go:934] updating node {m03 172.17.94.27 8443 v1.32.2 docker true true} ...
	I0407 13:06:48.366918    7088 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-573100-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.94.27
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-573100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 13:06:48.366970    7088 kube-vip.go:115] generating kube-vip config ...
	I0407 13:06:48.377358    7088 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0407 13:06:48.402103    7088 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0407 13:06:48.402307    7088 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.95.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
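The static pod manifest above is rendered from a template with the node-specific values filled in: the HA virtual IP 172.17.95.254, API server port 8443, interface eth0, plus lb_enable for the control-plane load-balancing auto-enabled at kube-vip.go:167. A reduced text/template sketch of that kind of generation; the template below is trimmed to a few env entries and is not minikube's full kube-vip template:

package main

import (
	"os"
	"text/template"
)

type vipConfig struct {
	Address   string
	Port      int
	Interface string
}

const envTmpl = `    env:
    - name: port
      value: "{{ .Port }}"
    - name: vip_interface
      value: {{ .Interface }}
    - name: address
      value: {{ .Address }}
    - name: lb_enable
      value: "true"
    - name: lb_port
      value: "{{ .Port }}"
`

func main() {
	t := template.Must(template.New("kube-vip-env").Parse(envTmpl))
	// Values taken from the manifest generated above.
	if err := t.Execute(os.Stdout, vipConfig{Address: "172.17.95.254", Port: 8443, Interface: "eth0"}); err != nil {
		panic(err)
	}
}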
	I0407 13:06:48.415366    7088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 13:06:48.432165    7088 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0407 13:06:48.442773    7088 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0407 13:06:48.463213    7088 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0407 13:06:48.463287    7088 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256
	I0407 13:06:48.463407    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm -> /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0407 13:06:48.463407    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
	I0407 13:06:48.463521    7088 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256
	I0407 13:06:48.477164    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:06:48.477591    7088 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0407 13:06:48.483196    7088 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0407 13:06:48.500225    7088 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0407 13:06:48.500225    7088 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0407 13:06:48.500225    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
	I0407 13:06:48.500225    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
	I0407 13:06:48.500225    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0407 13:06:48.510901    7088 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0407 13:06:48.562416    7088 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0407 13:06:48.562760    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
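binary.go:74 above skips the local cache and pulls each Kubernetes binary straight from dl.k8s.io, verifying it against the published .sha256 file before the runner copies it into /var/lib/minikube/binaries/v1.32.2. A self-contained sketch of that download-then-verify step (URL as in the log; error handling kept minimal):

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"net/http"
	"strings"
)

func fetch(url string) ([]byte, error) {
	resp, err := http.Get(url)
	if err != nil {
		return nil, err
	}
	defer resp.Body.Close()
	return io.ReadAll(resp.Body)
}

func main() {
	const base = "https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl"
	bin, err := fetch(base)
	if err != nil {
		panic(err)
	}
	sumFile, err := fetch(base + ".sha256")
	if err != nil {
		panic(err)
	}
	want := strings.Fields(string(sumFile))[0] // the .sha256 file carries the hex digest
	sum := sha256.Sum256(bin)
	if got := hex.EncodeToString(sum[:]); got != want {
		panic(fmt.Sprintf("checksum mismatch: got %s want %s", got, want))
	}
	fmt.Println("kubectl checksum verified")
}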
	I0407 13:06:49.794804    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0407 13:06:49.814094    7088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0407 13:06:49.857522    7088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 13:06:49.897176    7088 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0407 13:06:49.939080    7088 ssh_runner.go:195] Run: grep 172.17.95.254	control-plane.minikube.internal$ /etc/hosts
	I0407 13:06:49.946559    7088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:06:49.977295    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:06:50.194251    7088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:06:50.221374    7088 host.go:66] Checking if "ha-573100" exists ...
	I0407 13:06:50.222248    7088 start.go:317] joinCluster: &{Name:ha-573100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-573100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.95.223 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.82.162 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.17.94.27 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:06:50.222536    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0407 13:06:50.222626    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 13:06:52.404003    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:06:52.404003    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:52.404086    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 13:06:54.996292    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 13:06:54.996292    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:54.997153    7088 sshutil.go:53] new ssh client: &{IP:172.17.95.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\id_rsa Username:docker}
	I0407 13:06:55.206617    7088 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9839865s)
	I0407 13:06:55.206735    7088 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.17.94.27 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 13:06:55.206879    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i82xup.3c4guti3nmbehjm7 --discovery-token-ca-cert-hash sha256:e47514a965ab97a4e144f4fd6f094d10c7d6accee7eb763a802bd4fd6fa9e479 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-573100-m03 --control-plane --apiserver-advertise-address=172.17.94.27 --apiserver-bind-port=8443"
	I0407 13:07:37.435816    7088 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i82xup.3c4guti3nmbehjm7 --discovery-token-ca-cert-hash sha256:e47514a965ab97a4e144f4fd6f094d10c7d6accee7eb763a802bd4fd6fa9e479 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-573100-m03 --control-plane --apiserver-advertise-address=172.17.94.27 --apiserver-bind-port=8443": (42.2286799s)
	I0407 13:07:37.435877    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0407 13:07:38.163288    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-573100-m03 minikube.k8s.io/updated_at=2025_04_07T13_07_38_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277 minikube.k8s.io/name=ha-573100 minikube.k8s.io/primary=false
	I0407 13:07:38.354137    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-573100-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0407 13:07:38.552775    7088 start.go:319] duration metric: took 48.3303045s to joinCluster
	I0407 13:07:38.553084    7088 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.17.94.27 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 13:07:38.554309    7088 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:07:38.555957    7088 out.go:177] * Verifying Kubernetes components...
	I0407 13:07:38.572183    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:07:38.957903    7088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:07:39.001876    7088 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 13:07:39.002052    7088 kapi.go:59] client config for ha-573100: &rest.Config{Host:"https://172.17.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-573100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-573100\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2be92e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
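The rest.Config dumped above leaves QPS and Burst at zero, so client-go falls back to its default client-side rate limiter (roughly 5 QPS with a burst of 10); that limiter is what produces the "Waited for ... due to client-side throttling" messages further down in this log. A minimal sketch, not minikube's code (the helper name and the QPS/Burst values are illustrative), of building a clientset from the same kubeconfig with those limits raised:

	// Minimal client-go sketch: build a clientset from a kubeconfig and
	// override the client-side rate limits that otherwise fall back to
	// client-go's defaults when left at zero.
	package main

	import (
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func newClientset(kubeconfig string) (*kubernetes.Clientset, error) {
		cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
		if err != nil {
			return nil, err
		}
		cfg.QPS = 50    // illustrative values, not what minikube uses
		cfg.Burst = 100
		return kubernetes.NewForConfig(cfg)
	}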
	W0407 13:07:39.002588    7088 kubeadm.go:483] Overriding stale ClientConfig host https://172.17.95.254:8443 with https://172.17.95.223:8443
	I0407 13:07:39.003766    7088 node_ready.go:35] waiting up to 6m0s for node "ha-573100-m03" to be "Ready" ...
	I0407 13:07:39.003766    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:39.003766    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:39.003766    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:39.003766    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:39.018546    7088 round_trippers.go:581] Response Status: 200 OK in 14 milliseconds
	I0407 13:07:39.504540    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:39.504540    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:39.504540    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:39.504540    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:39.510541    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:07:40.004559    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:40.004559    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:40.004559    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:40.004559    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:40.010572    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:07:40.504774    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:40.504774    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:40.504774    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:40.504774    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:40.511951    7088 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 13:07:41.006640    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:41.006697    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:41.006697    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:41.006697    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:41.012117    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:41.012490    7088 node_ready.go:53] node "ha-573100-m03" has status "Ready":"False"
	I0407 13:07:41.505069    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:41.505069    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:41.505069    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:41.505069    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:41.511437    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:07:42.004117    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:42.004485    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:42.004485    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:42.004485    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:42.008774    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:07:42.504831    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:42.504952    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:42.504952    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:42.504952    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:42.846676    7088 round_trippers.go:581] Response Status: 200 OK in 341 milliseconds
	I0407 13:07:43.004405    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:43.004405    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:43.004405    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:43.004405    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:43.138046    7088 round_trippers.go:581] Response Status: 200 OK in 133 milliseconds
	I0407 13:07:43.138620    7088 node_ready.go:53] node "ha-573100-m03" has status "Ready":"False"
	I0407 13:07:43.504325    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:43.504325    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:43.504325    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:43.504325    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:43.509322    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:07:44.004474    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:44.004474    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:44.004474    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:44.004474    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:44.010492    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:44.504721    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:44.504721    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:44.504793    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:44.504793    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:44.509203    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:07:45.004678    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:45.004678    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:45.004678    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:45.004678    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:45.082222    7088 round_trippers.go:581] Response Status: 200 OK in 77 milliseconds
	I0407 13:07:45.504719    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:45.504719    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:45.504719    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:45.504719    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:45.531624    7088 round_trippers.go:581] Response Status: 200 OK in 26 milliseconds
	I0407 13:07:45.532650    7088 node_ready.go:53] node "ha-573100-m03" has status "Ready":"False"
	I0407 13:07:46.004139    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:46.004139    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:46.004139    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:46.004139    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:46.008764    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:07:46.504714    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:46.504714    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:46.504714    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:46.504714    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:46.510170    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:47.004601    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:47.004601    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:47.004668    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:47.004668    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:47.024926    7088 round_trippers.go:581] Response Status: 200 OK in 20 milliseconds
	I0407 13:07:47.504741    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:47.504741    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:47.504741    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:47.504741    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:47.513942    7088 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0407 13:07:48.005138    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:48.005138    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:48.005138    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:48.005138    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:48.010001    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:07:48.010374    7088 node_ready.go:53] node "ha-573100-m03" has status "Ready":"False"
	I0407 13:07:48.504362    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:48.504362    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:48.504362    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:48.504362    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:48.510542    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:07:49.003962    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:49.003962    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:49.003962    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:49.003962    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:49.009521    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:49.504567    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:49.504567    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:49.504567    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:49.504567    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:49.510810    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:07:50.004106    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:50.004106    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:50.004106    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:50.004106    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:50.009146    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:50.504256    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:50.504673    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:50.504673    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:50.504673    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:50.510312    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:50.510647    7088 node_ready.go:53] node "ha-573100-m03" has status "Ready":"False"
	I0407 13:07:51.004934    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:51.004934    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:51.004934    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:51.004934    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:51.010349    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:51.504459    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:51.504503    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:51.504503    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:51.504503    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:51.509822    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:07:52.004031    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:52.004031    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:52.004508    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:52.004508    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:52.008186    7088 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 13:07:52.506149    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:52.506260    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:52.506260    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:52.506366    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:52.512083    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:52.512083    7088 node_ready.go:53] node "ha-573100-m03" has status "Ready":"False"
	I0407 13:07:53.004261    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:53.004261    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:53.004261    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:53.004261    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:53.009813    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:53.505000    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:53.505000    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:53.505000    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:53.505000    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:53.510588    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:54.004001    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:54.004001    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:54.004001    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:54.004001    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:54.009257    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:54.504566    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:54.504989    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:54.504989    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:54.505066    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:54.510324    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:55.005257    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:55.005257    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:55.005257    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:55.005257    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:55.011064    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:55.011064    7088 node_ready.go:53] node "ha-573100-m03" has status "Ready":"False"
	I0407 13:07:55.505042    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:55.505500    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:55.505500    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:55.505500    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:55.511088    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:56.004432    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:56.004432    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:56.004432    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:56.004432    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:56.017908    7088 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0407 13:07:56.505190    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:56.505277    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:56.505277    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:56.505277    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:56.512696    7088 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 13:07:57.004187    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:57.004187    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:57.004187    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:57.004187    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:57.009848    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:57.504636    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:57.504717    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:57.504717    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:57.504717    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:57.509663    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:07:57.510664    7088 node_ready.go:53] node "ha-573100-m03" has status "Ready":"False"
	I0407 13:07:58.004943    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:58.004943    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:58.004943    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:58.004943    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:58.008962    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:07:58.504445    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:58.504445    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:58.504445    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:58.504445    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:58.509756    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:59.005536    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:59.005536    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:59.005536    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:59.005536    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:59.011371    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:59.504688    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:59.504688    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:59.504688    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:59.504688    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:59.510911    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:07:59.511085    7088 node_ready.go:53] node "ha-573100-m03" has status "Ready":"False"
	I0407 13:08:00.005137    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:08:00.005137    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.005137    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.005137    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.010134    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:08:00.011340    7088 node_ready.go:49] node "ha-573100-m03" has status "Ready":"True"
	I0407 13:08:00.011397    7088 node_ready.go:38] duration metric: took 21.0075341s for node "ha-573100-m03" to be "Ready" ...
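The readiness wait logged above is a plain poll: GET the node roughly twice a second and check its Ready condition until it reports True or the 6m0s budget runs out. A minimal client-go sketch of such a loop, assuming an existing *kubernetes.Clientset (the function name and the 500 ms interval are illustrative, not minikube's actual node_ready.go implementation):

	// Poll a node's Ready condition until it is True or the timeout expires.
	package main

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	func pollNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, timeout, true,
			func(ctx context.Context) (bool, error) {
				node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
				if err != nil {
					return false, nil // transient API errors: keep polling
				}
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady {
						return c.Status == corev1.ConditionTrue, nil
					}
				}
				return false, nil
			})
	}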
	I0407 13:08:00.011480    7088 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:08:00.011655    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods
	I0407 13:08:00.011655    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.011655    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.011655    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.017282    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:00.020639    7088 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-whpg2" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:00.020729    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-whpg2
	I0407 13:08:00.020729    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.020729    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.020880    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.028218    7088 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 13:08:00.029226    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:08:00.029226    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.029226    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.029226    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.034234    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:00.035256    7088 pod_ready.go:93] pod "coredns-668d6bf9bc-whpg2" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:00.035256    7088 pod_ready.go:82] duration metric: took 14.6173ms for pod "coredns-668d6bf9bc-whpg2" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:00.035357    7088 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-z4nkw" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:00.035357    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-z4nkw
	I0407 13:08:00.035357    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.035357    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.035357    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.043391    7088 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0407 13:08:00.043948    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:08:00.044011    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.044011    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.044011    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.047769    7088 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 13:08:00.047769    7088 pod_ready.go:93] pod "coredns-668d6bf9bc-z4nkw" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:00.047769    7088 pod_ready.go:82] duration metric: took 12.4122ms for pod "coredns-668d6bf9bc-z4nkw" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:00.047769    7088 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:00.048389    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/etcd-ha-573100
	I0407 13:08:00.048389    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.048389    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.048496    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.069798    7088 round_trippers.go:581] Response Status: 200 OK in 21 milliseconds
	I0407 13:08:00.070306    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:08:00.070306    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.070306    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.070306    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.073901    7088 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 13:08:00.075287    7088 pod_ready.go:93] pod "etcd-ha-573100" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:00.075350    7088 pod_ready.go:82] duration metric: took 27.0537ms for pod "etcd-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:00.075350    7088 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:00.075476    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/etcd-ha-573100-m02
	I0407 13:08:00.075507    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.075507    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.075507    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.078895    7088 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 13:08:00.078895    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:08:00.078895    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.078895    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.078895    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.085307    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:00.085528    7088 pod_ready.go:93] pod "etcd-ha-573100-m02" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:00.085623    7088 pod_ready.go:82] duration metric: took 10.2735ms for pod "etcd-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:00.085623    7088 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-573100-m03" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:00.206212    7088 request.go:661] Waited for 120.4521ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/etcd-ha-573100-m03
	I0407 13:08:00.206504    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/etcd-ha-573100-m03
	I0407 13:08:00.206711    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.206711    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.206792    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.211626    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:08:00.405419    7088 request.go:661] Waited for 193.2891ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:08:00.405834    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:08:00.405868    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.405868    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.405868    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.413631    7088 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 13:08:00.415874    7088 pod_ready.go:93] pod "etcd-ha-573100-m03" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:00.415874    7088 pod_ready.go:82] duration metric: took 330.2489ms for pod "etcd-ha-573100-m03" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:00.415874    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:00.606483    7088 request.go:661] Waited for 190.6078ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-573100
	I0407 13:08:00.606483    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-573100
	I0407 13:08:00.606483    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.606483    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.606483    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.611856    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:00.805396    7088 request.go:661] Waited for 192.7066ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:08:00.805396    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:08:00.805396    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.805396    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.805396    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.810355    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:08:00.810534    7088 pod_ready.go:93] pod "kube-apiserver-ha-573100" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:00.810534    7088 pod_ready.go:82] duration metric: took 394.6588ms for pod "kube-apiserver-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:00.810534    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:01.006505    7088 request.go:661] Waited for 195.9699ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-573100-m02
	I0407 13:08:01.006819    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-573100-m02
	I0407 13:08:01.006819    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:01.006819    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:01.006819    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:01.012204    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:01.205505    7088 request.go:661] Waited for 192.4647ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:08:01.205869    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:08:01.205869    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:01.205869    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:01.205869    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:01.210940    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:01.211378    7088 pod_ready.go:93] pod "kube-apiserver-ha-573100-m02" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:01.211438    7088 pod_ready.go:82] duration metric: took 400.9019ms for pod "kube-apiserver-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:01.211438    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-573100-m03" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:01.405236    7088 request.go:661] Waited for 193.5187ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-573100-m03
	I0407 13:08:01.405236    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-573100-m03
	I0407 13:08:01.405236    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:01.405236    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:01.405236    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:01.411261    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:08:01.605662    7088 request.go:661] Waited for 193.3661ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:08:01.605662    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:08:01.605662    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:01.605662    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:01.605662    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:01.611067    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:01.611134    7088 pod_ready.go:93] pod "kube-apiserver-ha-573100-m03" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:01.611134    7088 pod_ready.go:82] duration metric: took 399.6941ms for pod "kube-apiserver-ha-573100-m03" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:01.611134    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:01.805712    7088 request.go:661] Waited for 194.0364ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-573100
	I0407 13:08:01.806095    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-573100
	I0407 13:08:01.806261    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:01.806261    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:01.806261    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:01.815415    7088 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0407 13:08:02.005978    7088 request.go:661] Waited for 189.5201ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:08:02.005978    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:08:02.005978    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:02.005978    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:02.005978    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:02.011991    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:02.012305    7088 pod_ready.go:93] pod "kube-controller-manager-ha-573100" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:02.012305    7088 pod_ready.go:82] duration metric: took 401.1692ms for pod "kube-controller-manager-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:02.012399    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:02.205408    7088 request.go:661] Waited for 192.937ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-573100-m02
	I0407 13:08:02.205834    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-573100-m02
	I0407 13:08:02.205834    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:02.205895    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:02.205895    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:02.210213    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:08:02.406308    7088 request.go:661] Waited for 196.0936ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:08:02.406308    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:08:02.406308    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:02.406850    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:02.406850    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:02.414526    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:08:02.414756    7088 pod_ready.go:93] pod "kube-controller-manager-ha-573100-m02" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:02.414756    7088 pod_ready.go:82] duration metric: took 402.3554ms for pod "kube-controller-manager-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:02.414756    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-573100-m03" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:02.606427    7088 request.go:661] Waited for 191.6705ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-573100-m03
	I0407 13:08:02.606427    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-573100-m03
	I0407 13:08:02.606427    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:02.606427    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:02.606427    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:02.612043    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:02.805953    7088 request.go:661] Waited for 193.437ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:08:02.806327    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:08:02.806408    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:02.806408    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:02.806408    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:02.811259    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:08:02.811656    7088 pod_ready.go:93] pod "kube-controller-manager-ha-573100-m03" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:02.811721    7088 pod_ready.go:82] duration metric: took 396.9632ms for pod "kube-controller-manager-ha-573100-m03" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:02.811721    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fgqm9" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:03.005905    7088 request.go:661] Waited for 194.0834ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fgqm9
	I0407 13:08:03.005905    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fgqm9
	I0407 13:08:03.005905    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:03.005905    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:03.005905    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:03.011391    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:03.206299    7088 request.go:661] Waited for 194.387ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:08:03.206299    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:08:03.206299    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:03.206299    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:03.206299    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:03.211731    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:03.212275    7088 pod_ready.go:93] pod "kube-proxy-fgqm9" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:03.212275    7088 pod_ready.go:82] duration metric: took 400.5519ms for pod "kube-proxy-fgqm9" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:03.212275    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sxkgm" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:03.406727    7088 request.go:661] Waited for 194.2495ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sxkgm
	I0407 13:08:03.407196    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sxkgm
	I0407 13:08:03.407196    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:03.407196    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:03.407196    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:03.415711    7088 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0407 13:08:03.606913    7088 request.go:661] Waited for 191.2015ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:08:03.606913    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:08:03.606913    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:03.606913    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:03.606913    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:03.614891    7088 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 13:08:03.615121    7088 pod_ready.go:93] pod "kube-proxy-sxkgm" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:03.615121    7088 pod_ready.go:82] duration metric: took 402.8445ms for pod "kube-proxy-sxkgm" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:03.615121    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xsgf7" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:03.805393    7088 request.go:661] Waited for 190.2716ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsgf7
	I0407 13:08:03.805393    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsgf7
	I0407 13:08:03.805393    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:03.805393    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:03.805393    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:03.810650    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:04.005951    7088 request.go:661] Waited for 194.6301ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:08:04.005951    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:08:04.005951    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:04.005951    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:04.005951    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:04.013001    7088 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 13:08:04.013377    7088 pod_ready.go:93] pod "kube-proxy-xsgf7" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:04.013377    7088 pod_ready.go:82] duration metric: took 398.2546ms for pod "kube-proxy-xsgf7" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:04.013377    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:04.205615    7088 request.go:661] Waited for 191.7112ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-573100
	I0407 13:08:04.205615    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-573100
	I0407 13:08:04.205615    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:04.205615    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:04.205615    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:04.211655    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:08:04.406012    7088 request.go:661] Waited for 193.389ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:08:04.406012    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:08:04.406012    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:04.406012    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:04.406012    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:04.411154    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:08:04.411516    7088 pod_ready.go:93] pod "kube-scheduler-ha-573100" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:04.411516    7088 pod_ready.go:82] duration metric: took 398.1366ms for pod "kube-scheduler-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:04.411658    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:04.605898    7088 request.go:661] Waited for 194.2394ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-573100-m02
	I0407 13:08:04.605898    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-573100-m02
	I0407 13:08:04.605898    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:04.605898    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:04.605898    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:04.611981    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:08:04.805667    7088 request.go:661] Waited for 193.4921ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:08:04.805667    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:08:04.805667    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:04.805667    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:04.805667    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:04.814412    7088 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0407 13:08:04.815251    7088 pod_ready.go:93] pod "kube-scheduler-ha-573100-m02" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:04.815318    7088 pod_ready.go:82] duration metric: took 403.6582ms for pod "kube-scheduler-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:04.815376    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-573100-m03" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:05.005274    7088 request.go:661] Waited for 189.8398ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-573100-m03
	I0407 13:08:05.005274    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-573100-m03
	I0407 13:08:05.005274    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:05.005274    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:05.005274    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:05.011398    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:08:05.205349    7088 request.go:661] Waited for 193.2863ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:08:05.205794    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:08:05.205853    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:05.205853    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:05.205853    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:05.210655    7088 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 13:08:05.210765    7088 pod_ready.go:93] pod "kube-scheduler-ha-573100-m03" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:05.210765    7088 pod_ready.go:82] duration metric: took 395.3878ms for pod "kube-scheduler-ha-573100-m03" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:05.210765    7088 pod_ready.go:39] duration metric: took 5.1992208s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:08:05.210765    7088 api_server.go:52] waiting for apiserver process to appear ...
	I0407 13:08:05.219947    7088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:08:05.250266    7088 api_server.go:72] duration metric: took 26.6969918s to wait for apiserver process to appear ...
	I0407 13:08:05.250266    7088 api_server.go:88] waiting for apiserver healthz status ...
	I0407 13:08:05.250431    7088 api_server.go:253] Checking apiserver healthz at https://172.17.95.223:8443/healthz ...
	I0407 13:08:05.257067    7088 api_server.go:279] https://172.17.95.223:8443/healthz returned 200:
	ok
	I0407 13:08:05.257924    7088 round_trippers.go:470] GET https://172.17.95.223:8443/version
	I0407 13:08:05.257924    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:05.258043    7088 round_trippers.go:480]     Accept: application/json, */*
	I0407 13:08:05.258043    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:05.260102    7088 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0407 13:08:05.260274    7088 api_server.go:141] control plane version: v1.32.2
	I0407 13:08:05.260326    7088 api_server.go:131] duration metric: took 10.0606ms to wait for apiserver health ...
	I0407 13:08:05.260326    7088 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 13:08:05.405360    7088 request.go:661] Waited for 144.8775ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods
	I0407 13:08:05.405360    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods
	I0407 13:08:05.405360    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:05.405360    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:05.405360    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:05.412374    7088 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 13:08:05.414612    7088 system_pods.go:59] 24 kube-system pods found
	I0407 13:08:05.414612    7088 system_pods.go:61] "coredns-668d6bf9bc-whpg2" [48faa3ce-0f1f-4c88-8298-15960d3c75a7] Running
	I0407 13:08:05.414681    7088 system_pods.go:61] "coredns-668d6bf9bc-z4nkw" [4aa968e7-d945-4f70-932d-b42417702382] Running
	I0407 13:08:05.414681    7088 system_pods.go:61] "etcd-ha-573100" [c473d0ab-e66d-4b41-ad43-edce5e371027] Running
	I0407 13:08:05.414681    7088 system_pods.go:61] "etcd-ha-573100-m02" [0f05d56b-d0f5-4505-9d54-127111d30d27] Running
	I0407 13:08:05.414681    7088 system_pods.go:61] "etcd-ha-573100-m03" [caa1f496-b332-4035-873f-dae22202edc5] Running
	I0407 13:08:05.414681    7088 system_pods.go:61] "kindnet-fbm5f" [eccfc010-2f51-4693-92da-ce5e71254f88] Running
	I0407 13:08:05.414681    7088 system_pods.go:61] "kindnet-fxxw5" [4fc9602a-d72f-4421-96a3-a7b0b35e2ce6] Running
	I0407 13:08:05.414681    7088 system_pods.go:61] "kindnet-vhm9b" [355feff9-5819-4d85-82f0-2281fdcc5d5a] Running
	I0407 13:08:05.414681    7088 system_pods.go:61] "kube-apiserver-ha-573100" [60830754-3b25-4753-9ec0-d9cef7b7b548] Running
	I0407 13:08:05.414681    7088 system_pods.go:61] "kube-apiserver-ha-573100-m02" [5fa8bf0c-a2ff-4b0d-8e9f-a42172533517] Running
	I0407 13:08:05.414681    7088 system_pods.go:61] "kube-apiserver-ha-573100-m03" [2bfa7e7c-87be-4015-b16d-fd6f41383fb1] Running
	I0407 13:08:05.414681    7088 system_pods.go:61] "kube-controller-manager-ha-573100" [0c4d6f0d-d4ae-40cd-bfa7-b7f39dff081e] Running
	I0407 13:08:05.414738    7088 system_pods.go:61] "kube-controller-manager-ha-573100-m02" [cb31520b-fa77-4ceb-a798-c45f10c87d10] Running
	I0407 13:08:05.414738    7088 system_pods.go:61] "kube-controller-manager-ha-573100-m03" [2e7deda2-f453-4c3f-b1b9-432cc370678a] Running
	I0407 13:08:05.414738    7088 system_pods.go:61] "kube-proxy-fgqm9" [0033554f-f4b8-4c6a-8010-ace3b937df06] Running
	I0407 13:08:05.414738    7088 system_pods.go:61] "kube-proxy-sxkgm" [6e0a6f3f-a949-4b95-aaaa-d74c1a7e0efe] Running
	I0407 13:08:05.414738    7088 system_pods.go:61] "kube-proxy-xsgf7" [1bccfdb6-28f7-4190-a5a1-9316cfdf215e] Running
	I0407 13:08:05.414738    7088 system_pods.go:61] "kube-scheduler-ha-573100" [d46211dc-ab95-474b-abfc-218808a4d1aa] Running
	I0407 13:08:05.414738    7088 system_pods.go:61] "kube-scheduler-ha-573100-m02" [1fd3b48a-ef70-4cce-b7d4-24b44331bfba] Running
	I0407 13:08:05.414738    7088 system_pods.go:61] "kube-scheduler-ha-573100-m03" [749f4ff2-a63f-4ae7-b6de-d1c2d83b20de] Running
	I0407 13:08:05.414738    7088 system_pods.go:61] "kube-vip-ha-573100" [b8e24d1a-1309-482f-9734-99bcf4812448] Running
	I0407 13:08:05.414738    7088 system_pods.go:61] "kube-vip-ha-573100-m02" [6e3ad003-a31a-49de-841f-2e21e31f094d] Running
	I0407 13:08:05.414738    7088 system_pods.go:61] "kube-vip-ha-573100-m03" [5c76fc47-39e4-487d-a74e-6583cf7fb3e9] Running
	I0407 13:08:05.414738    7088 system_pods.go:61] "storage-provisioner" [8d89f971-c575-4089-b12b-823fe7524dc2] Running
	I0407 13:08:05.414738    7088 system_pods.go:74] duration metric: took 154.4112ms to wait for pod list to return data ...
	I0407 13:08:05.414738    7088 default_sa.go:34] waiting for default service account to be created ...
	I0407 13:08:05.606541    7088 request.go:661] Waited for 191.8018ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/default/serviceaccounts
	I0407 13:08:05.606541    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/default/serviceaccounts
	I0407 13:08:05.606999    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:05.606999    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:05.606999    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:05.612556    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:05.612692    7088 default_sa.go:45] found service account: "default"
	I0407 13:08:05.612692    7088 default_sa.go:55] duration metric: took 197.9526ms for default service account to be created ...
	I0407 13:08:05.612811    7088 system_pods.go:116] waiting for k8s-apps to be running ...
	I0407 13:08:05.806330    7088 request.go:661] Waited for 193.4397ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods
	I0407 13:08:05.806562    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods
	I0407 13:08:05.806562    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:05.806562    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:05.806562    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:05.811854    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:05.814110    7088 system_pods.go:86] 24 kube-system pods found
	I0407 13:08:05.814383    7088 system_pods.go:89] "coredns-668d6bf9bc-whpg2" [48faa3ce-0f1f-4c88-8298-15960d3c75a7] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "coredns-668d6bf9bc-z4nkw" [4aa968e7-d945-4f70-932d-b42417702382] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "etcd-ha-573100" [c473d0ab-e66d-4b41-ad43-edce5e371027] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "etcd-ha-573100-m02" [0f05d56b-d0f5-4505-9d54-127111d30d27] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "etcd-ha-573100-m03" [caa1f496-b332-4035-873f-dae22202edc5] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kindnet-fbm5f" [eccfc010-2f51-4693-92da-ce5e71254f88] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kindnet-fxxw5" [4fc9602a-d72f-4421-96a3-a7b0b35e2ce6] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kindnet-vhm9b" [355feff9-5819-4d85-82f0-2281fdcc5d5a] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-apiserver-ha-573100" [60830754-3b25-4753-9ec0-d9cef7b7b548] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-apiserver-ha-573100-m02" [5fa8bf0c-a2ff-4b0d-8e9f-a42172533517] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-apiserver-ha-573100-m03" [2bfa7e7c-87be-4015-b16d-fd6f41383fb1] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-controller-manager-ha-573100" [0c4d6f0d-d4ae-40cd-bfa7-b7f39dff081e] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-controller-manager-ha-573100-m02" [cb31520b-fa77-4ceb-a798-c45f10c87d10] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-controller-manager-ha-573100-m03" [2e7deda2-f453-4c3f-b1b9-432cc370678a] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-proxy-fgqm9" [0033554f-f4b8-4c6a-8010-ace3b937df06] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-proxy-sxkgm" [6e0a6f3f-a949-4b95-aaaa-d74c1a7e0efe] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-proxy-xsgf7" [1bccfdb6-28f7-4190-a5a1-9316cfdf215e] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-scheduler-ha-573100" [d46211dc-ab95-474b-abfc-218808a4d1aa] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-scheduler-ha-573100-m02" [1fd3b48a-ef70-4cce-b7d4-24b44331bfba] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-scheduler-ha-573100-m03" [749f4ff2-a63f-4ae7-b6de-d1c2d83b20de] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-vip-ha-573100" [b8e24d1a-1309-482f-9734-99bcf4812448] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-vip-ha-573100-m02" [6e3ad003-a31a-49de-841f-2e21e31f094d] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-vip-ha-573100-m03" [5c76fc47-39e4-487d-a74e-6583cf7fb3e9] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "storage-provisioner" [8d89f971-c575-4089-b12b-823fe7524dc2] Running
	I0407 13:08:05.814383    7088 system_pods.go:126] duration metric: took 201.5712ms to wait for k8s-apps to be running ...
	I0407 13:08:05.814383    7088 system_svc.go:44] waiting for kubelet service to be running ....
	I0407 13:08:05.825526    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:08:05.848747    7088 system_svc.go:56] duration metric: took 34.3638ms WaitForService to wait for kubelet
	I0407 13:08:05.848747    7088 kubeadm.go:582] duration metric: took 27.2954707s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 13:08:05.848812    7088 node_conditions.go:102] verifying NodePressure condition ...
	I0407 13:08:06.006134    7088 request.go:661] Waited for 157.1876ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes
	I0407 13:08:06.006134    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes
	I0407 13:08:06.006134    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:06.006134    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:06.006134    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:06.012010    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:06.012010    7088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 13:08:06.012616    7088 node_conditions.go:123] node cpu capacity is 2
	I0407 13:08:06.012616    7088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 13:08:06.012616    7088 node_conditions.go:123] node cpu capacity is 2
	I0407 13:08:06.012616    7088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 13:08:06.012616    7088 node_conditions.go:123] node cpu capacity is 2
	I0407 13:08:06.012616    7088 node_conditions.go:105] duration metric: took 163.8027ms to run NodePressure ...
	I0407 13:08:06.012726    7088 start.go:241] waiting for startup goroutines ...
	I0407 13:08:06.012726    7088 start.go:255] writing updated cluster config ...
	I0407 13:08:06.024480    7088 ssh_runner.go:195] Run: rm -f paused
	I0407 13:08:06.166146    7088 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0407 13:08:06.171363    7088 out.go:177] * Done! kubectl is now configured to use "ha-573100" cluster and "default" namespace by default
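The wait loop logged above polls the apiserver's /healthz endpoint and then checks the Ready condition of each system pod before declaring the cluster up. Roughly the same checks can be reproduced by hand against this cluster with kubectl; this is an illustrative sketch, assuming the kubeconfig written by minikube for the ha-573100 profile is active:

	# Query the same health endpoint the log polls; a healthy apiserver returns "ok".
	kubectl get --raw /healthz

	# Wait for the kube-proxy pods in kube-system to report Ready, mirroring the
	# 6m0s pod_ready.go timeout used in the log above.
	kubectl -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-proxy --timeout=6m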
	
	
	==> Docker <==
	Apr 07 13:00:25 ha-573100 dockerd[1465]: time="2025-04-07T13:00:25.835447784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 13:00:25 ha-573100 dockerd[1465]: time="2025-04-07T13:00:25.919154667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 13:00:25 ha-573100 dockerd[1465]: time="2025-04-07T13:00:25.919286468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 13:00:25 ha-573100 dockerd[1465]: time="2025-04-07T13:00:25.919303868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 13:00:25 ha-573100 dockerd[1465]: time="2025-04-07T13:00:25.939051282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 13:00:26 ha-573100 cri-dockerd[1356]: time="2025-04-07T13:00:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/445e1a78b6431a0d71140de96c13a77c9d52d9223e948af86963ba710b439534/resolv.conf as [nameserver 172.17.80.1]"
	Apr 07 13:00:26 ha-573100 cri-dockerd[1356]: time="2025-04-07T13:00:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b5e4d570b4f2c2584d899cda49b00a4d1370c51ee3637f62bd43b148d44abf06/resolv.conf as [nameserver 172.17.80.1]"
	Apr 07 13:00:26 ha-573100 dockerd[1465]: time="2025-04-07T13:00:26.505815051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 13:00:26 ha-573100 dockerd[1465]: time="2025-04-07T13:00:26.505957952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 13:00:26 ha-573100 dockerd[1465]: time="2025-04-07T13:00:26.506067752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 13:00:26 ha-573100 dockerd[1465]: time="2025-04-07T13:00:26.508339068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 13:00:26 ha-573100 dockerd[1465]: time="2025-04-07T13:00:26.581174972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 13:00:26 ha-573100 dockerd[1465]: time="2025-04-07T13:00:26.581299872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 13:00:26 ha-573100 dockerd[1465]: time="2025-04-07T13:00:26.581312273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 13:00:26 ha-573100 dockerd[1465]: time="2025-04-07T13:00:26.581476074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 13:08:45 ha-573100 dockerd[1465]: time="2025-04-07T13:08:45.095486781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 13:08:45 ha-573100 dockerd[1465]: time="2025-04-07T13:08:45.095629381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 13:08:45 ha-573100 dockerd[1465]: time="2025-04-07T13:08:45.095648281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 13:08:45 ha-573100 dockerd[1465]: time="2025-04-07T13:08:45.095920882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 13:08:45 ha-573100 cri-dockerd[1356]: time="2025-04-07T13:08:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b6426d1240b751896b681cc7894d8a9bafa41a6d27f50fe9a91982928cecea31/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 07 13:08:47 ha-573100 cri-dockerd[1356]: time="2025-04-07T13:08:47Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 07 13:08:47 ha-573100 dockerd[1465]: time="2025-04-07T13:08:47.520545882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 13:08:47 ha-573100 dockerd[1465]: time="2025-04-07T13:08:47.521400590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 13:08:47 ha-573100 dockerd[1465]: time="2025-04-07T13:08:47.521568792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 13:08:47 ha-573100 dockerd[1465]: time="2025-04-07T13:08:47.521673093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
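The Docker section above is the dockerd/cri-dockerd journal collected from the primary node. If the profile is still running, the same journal can be read directly from the VM; a sketch, where minikube stands for the same out/minikube-windows-amd64.exe binary the test invokes:

	# Read the Docker unit journal inside the ha-573100 VM.
	minikube -p ha-573100 ssh "sudo journalctl -u docker --no-pager"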
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	06b5f6c977b06       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   About a minute ago   Running             busybox                   0                   b6426d1240b75       busybox-58667487b6-tj2cw
	a02d067ca0257       c69fa2e9cbf5f                                                                                         9 minutes ago        Running             coredns                   0                   b5e4d570b4f2c       coredns-668d6bf9bc-whpg2
	b26f43fa5c1ed       c69fa2e9cbf5f                                                                                         9 minutes ago        Running             coredns                   0                   445e1a78b6431       coredns-668d6bf9bc-z4nkw
	61fc0b71fca43       6e38f40d628db                                                                                         9 minutes ago        Running             storage-provisioner       0                   974692fdcacb6       storage-provisioner
	ff53930de566d       kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495              9 minutes ago        Running             kindnet-cni               0                   23727db8abce4       kindnet-vhm9b
	0c75c161a6626       f1332858868e1                                                                                         9 minutes ago        Running             kube-proxy                0                   65b0ccd3b332a       kube-proxy-xsgf7
	31ba7c7d935d0       ghcr.io/kube-vip/kube-vip@sha256:e01c90bcdd3eb37a46aaf04f6c86cca3e66dd0db7a231f3c8e8aa105635c158a     9 minutes ago        Running             kube-vip                  0                   f928ea89ee802       kube-vip-ha-573100
	bad0116ca1089       d8e673e7c9983                                                                                         10 minutes ago       Running             kube-scheduler            0                   6b7e896091c3e       kube-scheduler-ha-573100
	bba5768a9eb4d       85b7a174738ba                                                                                         10 minutes ago       Running             kube-apiserver            0                   b7c20f9e9ccc7       kube-apiserver-ha-573100
	9dc6d594af6db       b6a454c5a800d                                                                                         10 minutes ago       Running             kube-controller-manager   0                   3f1e795485f06       kube-controller-manager-ha-573100
	0ad6c1a3c3233       a9e7e6b294baf                                                                                         10 minutes ago       Running             etcd                      0                   8094c085641d0       etcd-ha-573100
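Since this cluster runs the Docker runtime via cri-dockerd, an equivalent listing of the containers in the table above can be taken from inside the VM; illustrative only:

	# List running containers as seen by the Docker engine on the primary node.
	minikube -p ha-573100 ssh "docker ps"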
	
	
	==> coredns [a02d067ca025] <==
	[INFO] plugin/reload: Running configuration SHA512 = 52f38634f47d27a60a843ea08b564c25eb754b24bbf06ec66f8366b52e126543ce16cee7cc062958162af0c89604123ac00e3f032b67ea2f0f7eb90c30818844
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59124 - 48887 "HINFO IN 1279938540662885478.338108461797422407. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.083715361s
	[INFO] 10.244.1.2:51285 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.100440479s
	[INFO] 10.244.2.2:46629 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.06263721s
	[INFO] 10.244.0.4:47614 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.247363626s
	[INFO] 10.244.1.2:43452 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011375511s
	[INFO] 10.244.1.2:33255 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109901s
	[INFO] 10.244.1.2:32770 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000301403s
	[INFO] 10.244.2.2:45044 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.006079259s
	[INFO] 10.244.2.2:39530 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000107401s
	[INFO] 10.244.2.2:55103 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000218902s
	[INFO] 10.244.2.2:34607 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103001s
	[INFO] 10.244.0.4:42722 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000148502s
	[INFO] 10.244.0.4:48147 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000304603s
	[INFO] 10.244.1.2:55020 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000348003s
	[INFO] 10.244.2.2:54788 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000179002s
	[INFO] 10.244.0.4:52200 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000228203s
	[INFO] 10.244.0.4:44262 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000213903s
	[INFO] 10.244.0.4:44282 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000168102s
	[INFO] 10.244.1.2:56283 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000237302s
	[INFO] 10.244.2.2:47768 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000092101s
	[INFO] 10.244.0.4:53940 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000291803s
	[INFO] 10.244.0.4:46900 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000125701s
	
	
	==> coredns [b26f43fa5c1e] <==
	[INFO] 10.244.2.2:49769 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183502s
	[INFO] 10.244.2.2:46536 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000179701s
	[INFO] 10.244.2.2:33107 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107401s
	[INFO] 10.244.2.2:41774 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107402s
	[INFO] 10.244.0.4:46869 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000305903s
	[INFO] 10.244.0.4:41513 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000347403s
	[INFO] 10.244.0.4:57173 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.029613287s
	[INFO] 10.244.0.4:58270 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119901s
	[INFO] 10.244.0.4:33927 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000186402s
	[INFO] 10.244.0.4:44042 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079301s
	[INFO] 10.244.1.2:43651 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000204102s
	[INFO] 10.244.1.2:49283 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000189302s
	[INFO] 10.244.1.2:52162 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000747s
	[INFO] 10.244.2.2:57433 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136201s
	[INFO] 10.244.2.2:51627 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000212203s
	[INFO] 10.244.2.2:32807 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069201s
	[INFO] 10.244.0.4:54052 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000155801s
	[INFO] 10.244.1.2:54124 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189602s
	[INFO] 10.244.1.2:58803 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000187501s
	[INFO] 10.244.1.2:46708 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000207902s
	[INFO] 10.244.2.2:36414 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221503s
	[INFO] 10.244.2.2:35259 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000222302s
	[INFO] 10.244.2.2:35502 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108201s
	[INFO] 10.244.0.4:32882 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000233503s
	[INFO] 10.244.0.4:33670 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000127102s
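The CoreDNS logs above record lookups such as kubernetes.default and host.minikube.internal coming from the pod network (10.244.x.x). One way to exercise the same resolution path is to run nslookup from the busybox pod the test deployed; a sketch, with the pod and CoreDNS names taken from this report and likely different on other runs:

	# Resolve the in-cluster API service through CoreDNS from a pod.
	kubectl exec busybox-58667487b6-tj2cw -- nslookup kubernetes.default

	# Show the queries that resolution generates in one CoreDNS replica.
	kubectl -n kube-system logs coredns-668d6bf9bc-whpg2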
	
	
	==> describe nodes <==
	Name:               ha-573100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-573100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
	                    minikube.k8s.io/name=ha-573100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_07T13_00_00_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 12:59:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-573100
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 13:09:50 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 13:09:00 +0000   Mon, 07 Apr 2025 12:59:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 13:09:00 +0000   Mon, 07 Apr 2025 12:59:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 13:09:00 +0000   Mon, 07 Apr 2025 12:59:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 13:09:00 +0000   Mon, 07 Apr 2025 13:00:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.95.223
	  Hostname:    ha-573100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 7bf27969232e484f9333d5db0fe4ff8e
	  System UUID:                a244b224-8deb-a04f-b638-26a3468cc88e
	  Boot ID:                    b7643801-8375-43e7-a33f-969d88d1e272
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-tj2cw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 coredns-668d6bf9bc-whpg2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m48s
	  kube-system                 coredns-668d6bf9bc-z4nkw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m48s
	  kube-system                 etcd-ha-573100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m54s
	  kube-system                 kindnet-vhm9b                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m48s
	  kube-system                 kube-apiserver-ha-573100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 kube-controller-manager-ha-573100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 kube-proxy-xsgf7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	  kube-system                 kube-scheduler-ha-573100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m53s
	  kube-system                 kube-vip-ha-573100                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m56s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m40s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 9m46s              kube-proxy       
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)  kubelet          Node ha-573100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  10m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)  kubelet          Node ha-573100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)  kubelet          Node ha-573100 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 9m53s              kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  9m53s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  9m53s              kubelet          Node ha-573100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m53s              kubelet          Node ha-573100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m53s              kubelet          Node ha-573100 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           9m49s              node-controller  Node ha-573100 event: Registered Node ha-573100 in Controller
	  Normal  NodeReady                9m29s              kubelet          Node ha-573100 status is now: NodeReady
	  Normal  RegisteredNode           6m7s               node-controller  Node ha-573100 event: Registered Node ha-573100 in Controller
	  Normal  RegisteredNode           2m9s               node-controller  Node ha-573100 event: Registered Node ha-573100 in Controller
	
	
	Name:               ha-573100-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-573100-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
	                    minikube.k8s.io/name=ha-573100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_04_07T13_03_38_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 13:03:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-573100-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 13:09:52 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 13:09:11 +0000   Mon, 07 Apr 2025 13:03:33 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 13:09:11 +0000   Mon, 07 Apr 2025 13:03:33 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 13:09:11 +0000   Mon, 07 Apr 2025 13:03:33 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 13:09:11 +0000   Mon, 07 Apr 2025 13:04:02 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.82.162
	  Hostname:    ha-573100-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 16d352d015944a28a8cbdb5d22377f2b
	  System UUID:                20eb10cf-52ab-d249-9c07-7fd1050910cc
	  Boot ID:                    8202eddd-270b-492a-bed7-b8635542d451
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-gtkbk                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 etcd-ha-573100-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m17s
	  kube-system                 kindnet-fxxw5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m19s
	  kube-system                 kube-apiserver-ha-573100-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-controller-manager-ha-573100-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-proxy-sxkgm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m19s
	  kube-system                 kube-scheduler-ha-573100-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m17s
	  kube-system                 kube-vip-ha-573100-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m17s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m13s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  6m19s (x8 over 6m19s)  kubelet          Node ha-573100-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m19s (x8 over 6m19s)  kubelet          Node ha-573100-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m19s (x7 over 6m19s)  kubelet          Node ha-573100-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m14s                  node-controller  Node ha-573100-m02 event: Registered Node ha-573100-m02 in Controller
	  Normal  RegisteredNode           6m7s                   node-controller  Node ha-573100-m02 event: Registered Node ha-573100-m02 in Controller
	  Normal  RegisteredNode           2m9s                   node-controller  Node ha-573100-m02 event: Registered Node ha-573100-m02 in Controller
	
	
	Name:               ha-573100-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-573100-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
	                    minikube.k8s.io/name=ha-573100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_04_07T13_07_38_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 13:07:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-573100-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 13:09:44 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 13:09:02 +0000   Mon, 07 Apr 2025 13:07:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 13:09:02 +0000   Mon, 07 Apr 2025 13:07:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 13:09:02 +0000   Mon, 07 Apr 2025 13:07:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 13:09:02 +0000   Mon, 07 Apr 2025 13:07:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.94.27
	  Hostname:    ha-573100-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b3278c56cde744459d2e0158ad4d0d5d
	  System UUID:                9ab42d67-ccec-6745-a537-30243250ed15
	  Boot ID:                    8a3ff060-2959-44eb-8288-f4f7d48d3c5c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-szx9k                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 etcd-ha-573100-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m20s
	  kube-system                 kindnet-fbm5f                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m21s
	  kube-system                 kube-apiserver-ha-573100-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-controller-manager-ha-573100-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-proxy-fgqm9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m21s
	  kube-system                 kube-scheduler-ha-573100-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         2m20s
	  kube-system                 kube-vip-ha-573100-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 2m16s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  2m22s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  2m21s (x8 over 2m22s)  kubelet          Node ha-573100-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m21s (x8 over 2m22s)  kubelet          Node ha-573100-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     2m21s (x7 over 2m22s)  kubelet          Node ha-573100-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m19s                  node-controller  Node ha-573100-m03 event: Registered Node ha-573100-m03 in Controller
	  Normal  RegisteredNode           2m17s                  node-controller  Node ha-573100-m03 event: Registered Node ha-573100-m03 in Controller
	  Normal  RegisteredNode           2m9s                   node-controller  Node ha-573100-m03 event: Registered Node ha-573100-m03 in Controller
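The three node descriptions above cover the primary node and both secondaries; the request percentages are computed against each node's 2-CPU allocatable capacity (for ha-573100, 950m requested out of 2000m allocatable is roughly 47%). Any of them can be regenerated with kubectl, assuming it is pointed at this cluster:

	# Describe the primary control-plane node; repeat for ha-573100-m02 and ha-573100-m03.
	kubectl describe node ha-573100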
	
	
	==> dmesg <==
	[  +1.798118] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[Apr 7 12:58] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +47.727213] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.181165] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[Apr 7 12:59] systemd-fstab-generator[1020]: Ignoring "noauto" option for root device
	[  +0.087809] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.492769] systemd-fstab-generator[1058]: Ignoring "noauto" option for root device
	[  +0.203876] systemd-fstab-generator[1070]: Ignoring "noauto" option for root device
	[  +0.216509] systemd-fstab-generator[1085]: Ignoring "noauto" option for root device
	[  +2.835051] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.177472] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.177083] systemd-fstab-generator[1334]: Ignoring "noauto" option for root device
	[  +0.262900] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	[ +11.578949] systemd-fstab-generator[1450]: Ignoring "noauto" option for root device
	[  +0.110688] kauditd_printk_skb: 206 callbacks suppressed
	[  +3.501132] systemd-fstab-generator[1716]: Ignoring "noauto" option for root device
	[  +6.236247] systemd-fstab-generator[1861]: Ignoring "noauto" option for root device
	[  +0.093827] kauditd_printk_skb: 74 callbacks suppressed
	[  +5.412978] kauditd_printk_skb: 67 callbacks suppressed
	[  +3.605904] systemd-fstab-generator[2382]: Ignoring "noauto" option for root device
	[Apr 7 13:00] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.314204] kauditd_printk_skb: 29 callbacks suppressed
	[Apr 7 13:03] kauditd_printk_skb: 26 callbacks suppressed
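The dmesg excerpt above, like the Docker, container status, CoreDNS, node, and etcd sections around it, is part of the combined diagnostic dump minikube gathers for a profile. A fresh dump can be collected from the same cluster if needed; illustrative, using the profile name from this run:

	# Regenerate the combined log sections for the ha-573100 profile.
	minikube -p ha-573100 logs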
	
	
	==> etcd [0ad6c1a3c323] <==
	{"level":"info","ts":"2025-04-07T13:07:34.860957Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"926100769fc6b980","remote-peer-id":"e8e4bde5d058f2fd"}
	{"level":"info","ts":"2025-04-07T13:07:34.870312Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"926100769fc6b980","remote-peer-id":"e8e4bde5d058f2fd"}
	{"level":"info","ts":"2025-04-07T13:07:34.895786Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"926100769fc6b980","remote-peer-id":"e8e4bde5d058f2fd"}
	{"level":"info","ts":"2025-04-07T13:07:34.903506Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"926100769fc6b980","to":"e8e4bde5d058f2fd","stream-type":"stream Message"}
	{"level":"info","ts":"2025-04-07T13:07:34.903618Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"926100769fc6b980","remote-peer-id":"e8e4bde5d058f2fd"}
	{"level":"warn","ts":"2025-04-07T13:07:35.677994Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"e8e4bde5d058f2fd","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2025-04-07T13:07:36.678632Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"e8e4bde5d058f2fd","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2025-04-07T13:07:37.182191Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"141.908868ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-07T13:07:37.182305Z","caller":"traceutil/trace.go:171","msg":"trace[831205899] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1555; }","duration":"142.037669ms","start":"2025-04-07T13:07:37.040238Z","end":"2025-04-07T13:07:37.182275Z","steps":["trace[831205899] 'range keys from in-memory index tree'  (duration: 140.527763ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T13:07:37.196516Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"926100769fc6b980 switched to configuration voters=(9582196246384953831 10547712311765154176 16781746906229961469)"}
	{"level":"info","ts":"2025-04-07T13:07:37.197211Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"a636acee123da6f7","local-member-id":"926100769fc6b980"}
	{"level":"info","ts":"2025-04-07T13:07:37.208283Z","caller":"etcdserver/server.go:2018","msg":"applied a configuration change through raft","local-member-id":"926100769fc6b980","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"e8e4bde5d058f2fd"}
	{"level":"info","ts":"2025-04-07T13:07:37.440922Z","caller":"traceutil/trace.go:171","msg":"trace[2092561456] transaction","detail":"{read_only:false; response_revision:1556; number_of_response:1; }","duration":"131.535234ms","start":"2025-04-07T13:07:37.309370Z","end":"2025-04-07T13:07:37.440905Z","steps":["trace[2092561456] 'process raft request'  (duration: 131.427833ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T13:07:42.845393Z","caller":"etcdserver/raft.go:426","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"e8e4bde5d058f2fd","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"159.818488ms"}
	{"level":"warn","ts":"2025-04-07T13:07:42.845469Z","caller":"etcdserver/raft.go:426","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"84faccb7a9db49e7","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"159.898188ms"}
	{"level":"info","ts":"2025-04-07T13:07:42.858351Z","caller":"traceutil/trace.go:171","msg":"trace[137932529] transaction","detail":"{read_only:false; response_revision:1579; number_of_response:1; }","duration":"231.641664ms","start":"2025-04-07T13:07:42.626691Z","end":"2025-04-07T13:07:42.858333Z","steps":["trace[137932529] 'process raft request'  (duration: 231.338863ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T13:07:42.858723Z","caller":"traceutil/trace.go:171","msg":"trace[455351850] linearizableReadLoop","detail":"{readStateIndex:1762; appliedIndex:1764; }","duration":"334.435605ms","start":"2025-04-07T13:07:42.524278Z","end":"2025-04-07T13:07:42.858714Z","steps":["trace[455351850] 'read index received'  (duration: 334.415105ms)","trace[455351850] 'applied index is now lower than readState.Index'  (duration: 2.9µs)"],"step_count":2}
	{"level":"warn","ts":"2025-04-07T13:07:42.858868Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"334.596305ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-573100-m03\" limit:1 ","response":"range_response_count:1 size:4442"}
	{"level":"info","ts":"2025-04-07T13:07:42.858896Z","caller":"traceutil/trace.go:171","msg":"trace[1019942844] range","detail":"{range_begin:/registry/minions/ha-573100-m03; range_end:; response_count:1; response_revision:1579; }","duration":"334.664805ms","start":"2025-04-07T13:07:42.524224Z","end":"2025-04-07T13:07:42.858889Z","steps":["trace[1019942844] 'agreement among raft nodes before linearized reading'  (duration: 334.525605ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T13:07:42.858919Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-07T13:07:42.524130Z","time spent":"334.781505ms","remote":"127.0.0.1:58352","response type":"/etcdserverpb.KV/Range","request count":0,"request size":35,"response count":1,"response size":4466,"request content":"key:\"/registry/minions/ha-573100-m03\" limit:1 "}
	{"level":"warn","ts":"2025-04-07T13:07:43.150242Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"107.001253ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2025-04-07T13:07:43.150467Z","caller":"traceutil/trace.go:171","msg":"trace[814726504] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1580; }","duration":"108.026257ms","start":"2025-04-07T13:07:43.042328Z","end":"2025-04-07T13:07:43.150354Z","steps":["trace[814726504] 'range keys from in-memory index tree'  (duration: 105.141947ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T13:07:43.150878Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"128.981526ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-573100-m03\" limit:1 ","response":"range_response_count:1 size:4442"}
	{"level":"info","ts":"2025-04-07T13:07:43.150909Z","caller":"traceutil/trace.go:171","msg":"trace[668921290] range","detail":"{range_begin:/registry/minions/ha-573100-m03; range_end:; response_count:1; response_revision:1580; }","duration":"129.015126ms","start":"2025-04-07T13:07:43.021884Z","end":"2025-04-07T13:07:43.150899Z","steps":["trace[668921290] 'range keys from in-memory index tree'  (duration: 127.506121ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T13:08:44.749738Z","caller":"traceutil/trace.go:171","msg":"trace[1787876185] transaction","detail":"{read_only:false; response_revision:1841; number_of_response:1; }","duration":"124.460715ms","start":"2025-04-07T13:08:44.625260Z","end":"2025-04-07T13:08:44.749721Z","steps":["trace[1787876185] 'process raft request'  (duration: 71.96934ms)","trace[1787876185] 'compare'  (duration: 52.345575ms)"],"step_count":2}
	
	
	==> kernel <==
	 13:09:52 up 12 min,  0 users,  load average: 0.31, 0.38, 0.25
	Linux ha-573100 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ff53930de566] <==
	I0407 13:09:03.331370       1 main.go:324] Node ha-573100-m03 has CIDR [10.244.2.0/24] 
	I0407 13:09:13.332300       1 main.go:297] Handling node with IPs: map[172.17.95.223:{}]
	I0407 13:09:13.332467       1 main.go:301] handling current node
	I0407 13:09:13.332568       1 main.go:297] Handling node with IPs: map[172.17.82.162:{}]
	I0407 13:09:13.332760       1 main.go:324] Node ha-573100-m02 has CIDR [10.244.1.0/24] 
	I0407 13:09:13.333052       1 main.go:297] Handling node with IPs: map[172.17.94.27:{}]
	I0407 13:09:13.333272       1 main.go:324] Node ha-573100-m03 has CIDR [10.244.2.0/24] 
	I0407 13:09:23.335216       1 main.go:297] Handling node with IPs: map[172.17.95.223:{}]
	I0407 13:09:23.335318       1 main.go:301] handling current node
	I0407 13:09:23.335339       1 main.go:297] Handling node with IPs: map[172.17.82.162:{}]
	I0407 13:09:23.335347       1 main.go:324] Node ha-573100-m02 has CIDR [10.244.1.0/24] 
	I0407 13:09:23.335971       1 main.go:297] Handling node with IPs: map[172.17.94.27:{}]
	I0407 13:09:23.336075       1 main.go:324] Node ha-573100-m03 has CIDR [10.244.2.0/24] 
	I0407 13:09:33.335409       1 main.go:297] Handling node with IPs: map[172.17.94.27:{}]
	I0407 13:09:33.335544       1 main.go:324] Node ha-573100-m03 has CIDR [10.244.2.0/24] 
	I0407 13:09:33.335978       1 main.go:297] Handling node with IPs: map[172.17.95.223:{}]
	I0407 13:09:33.336038       1 main.go:301] handling current node
	I0407 13:09:33.336074       1 main.go:297] Handling node with IPs: map[172.17.82.162:{}]
	I0407 13:09:33.336407       1 main.go:324] Node ha-573100-m02 has CIDR [10.244.1.0/24] 
	I0407 13:09:43.331304       1 main.go:297] Handling node with IPs: map[172.17.95.223:{}]
	I0407 13:09:43.331408       1 main.go:301] handling current node
	I0407 13:09:43.331429       1 main.go:297] Handling node with IPs: map[172.17.82.162:{}]
	I0407 13:09:43.331438       1 main.go:324] Node ha-573100-m02 has CIDR [10.244.1.0/24] 
	I0407 13:09:43.331731       1 main.go:297] Handling node with IPs: map[172.17.94.27:{}]
	I0407 13:09:43.331749       1 main.go:324] Node ha-573100-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [bba5768a9eb4] <==
	I0407 12:59:57.983721       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0407 12:59:58.581732       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0407 12:59:59.237353       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0407 12:59:59.259824       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0407 12:59:59.275422       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0407 13:00:03.890771       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0407 13:00:04.033269       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0407 13:07:31.933625       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="7µs" method="PATCH" path="/api/v1/namespaces/default/events/ha-573100-m03.18340b2eb74566b8" result=null
	E0407 13:07:31.933798       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 26.1µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0407 13:07:31.937967       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="PATCH" URI="/api/v1/namespaces/default/events/ha-573100-m03.18340b2eb74566b8" auditID="17c35ea3-97a7-4486-b4a9-7e57e93a5e49"
	E0407 13:08:52.418923       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55010: use of closed network connection
	E0407 13:08:53.028768       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55012: use of closed network connection
	E0407 13:08:54.834355       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55014: use of closed network connection
	E0407 13:08:55.507349       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55016: use of closed network connection
	E0407 13:08:56.041504       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55018: use of closed network connection
	E0407 13:08:56.584069       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55021: use of closed network connection
	E0407 13:08:57.096494       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55023: use of closed network connection
	E0407 13:08:57.619763       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55025: use of closed network connection
	E0407 13:08:58.162862       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55027: use of closed network connection
	E0407 13:08:59.080617       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55030: use of closed network connection
	E0407 13:09:09.593391       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55032: use of closed network connection
	E0407 13:09:10.114044       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55034: use of closed network connection
	E0407 13:09:20.603647       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55036: use of closed network connection
	E0407 13:09:21.090186       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55039: use of closed network connection
	E0407 13:09:31.580893       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55041: use of closed network connection
	
	
	==> kube-controller-manager [9dc6d594af6d] <==
	I0407 13:07:43.808400       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100-m03"
	I0407 13:07:49.834408       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100-m02"
	I0407 13:07:59.585938       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100-m03"
	I0407 13:07:59.627581       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100-m03"
	I0407 13:08:00.468051       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100-m03"
	I0407 13:08:01.763382       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100-m03"
	I0407 13:08:43.930290       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="184.837317ms"
	I0407 13:08:43.991953       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="61.116304ms"
	I0407 13:08:44.404029       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="412.007674ms"
	I0407 13:08:44.453778       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="49.619965ms"
	I0407 13:08:44.527723       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="69.422932ms"
	I0407 13:08:44.528048       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="70.5µs"
	I0407 13:08:44.751724       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="133.767446ms"
	I0407 13:08:44.753797       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="33.3µs"
	I0407 13:08:48.548498       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="183.676897ms"
	E0407 13:08:48.548688       1 replica_set.go:560] "Unhandled Error" err="sync \"default/busybox-58667487b6\" failed with Operation cannot be fulfilled on replicasets.apps \"busybox-58667487b6\": the object has been modified; please apply your changes to the latest version and try again" logger="UnhandledError"
	I0407 13:08:48.550313       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="119.401µs"
	I0407 13:08:48.555965       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="218.302µs"
	I0407 13:08:48.633440       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="42.439115ms"
	I0407 13:08:48.634228       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="69.701µs"
	I0407 13:08:49.419064       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="43.471725ms"
	I0407 13:08:49.419614       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="92.201µs"
	I0407 13:09:00.417837       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100"
	I0407 13:09:02.771418       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100-m03"
	I0407 13:09:11.251321       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100-m02"
	
	
	==> kube-proxy [0c75c161a662] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0407 13:00:05.730778       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0407 13:00:05.789031       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.17.95.223"]
	E0407 13:00:05.789676       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0407 13:00:05.849207       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0407 13:00:05.849339       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0407 13:00:05.849373       1 server_linux.go:170] "Using iptables Proxier"
	I0407 13:00:05.854013       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0407 13:00:05.857233       1 server.go:497] "Version info" version="v1.32.2"
	I0407 13:00:05.857273       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 13:00:05.870851       1 config.go:199] "Starting service config controller"
	I0407 13:00:05.870878       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0407 13:00:05.871069       1 config.go:105] "Starting endpoint slice config controller"
	I0407 13:00:05.871182       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0407 13:00:05.876312       1 config.go:329] "Starting node config controller"
	I0407 13:00:05.876398       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0407 13:00:05.972226       1 shared_informer.go:320] Caches are synced for service config
	I0407 13:00:05.972226       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0407 13:00:05.976621       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bad0116ca108] <==
	W0407 12:59:57.069862       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0407 12:59:57.069915       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:59:57.084489       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0407 12:59:57.084519       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 12:59:57.097941       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0407 12:59:57.097984       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:59:57.161350       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0407 12:59:57.161455       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:59:57.186004       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0407 12:59:57.186050       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:59:57.260831       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0407 12:59:57.260872       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0407 12:59:59.775675       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0407 13:07:31.323960       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-nwwzq\": pod kube-proxy-nwwzq is already assigned to node \"ha-573100-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-nwwzq" node="ha-573100-m03"
	E0407 13:07:31.325584       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-fbm5f\": pod kindnet-fbm5f is already assigned to node \"ha-573100-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-fbm5f" node="ha-573100-m03"
	E0407 13:07:31.330201       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod eccfc010-2f51-4693-92da-ce5e71254f88(kube-system/kindnet-fbm5f) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-fbm5f"
	E0407 13:07:31.332517       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-fbm5f\": pod kindnet-fbm5f is already assigned to node \"ha-573100-m03\"" pod="kube-system/kindnet-fbm5f"
	I0407 13:07:31.332551       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-fbm5f" node="ha-573100-m03"
	E0407 13:07:31.330273       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod d49c5523-bbfb-495b-bba3-b60a86f646fb(kube-system/kube-proxy-nwwzq) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-nwwzq"
	E0407 13:07:31.334602       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-nwwzq\": pod kube-proxy-nwwzq is already assigned to node \"ha-573100-m03\"" pod="kube-system/kube-proxy-nwwzq"
	I0407 13:07:31.334995       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-nwwzq" node="ha-573100-m03"
	E0407 13:07:31.323946       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-q89td\": pod kindnet-q89td is already assigned to node \"ha-573100-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-q89td" node="ha-573100-m03"
	E0407 13:07:31.335491       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod e32634bd-0382-43ca-bf16-77fe3f5b7fef(kube-system/kindnet-q89td) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-q89td"
	E0407 13:07:31.335565       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-q89td\": pod kindnet-q89td is already assigned to node \"ha-573100-m03\"" pod="kube-system/kindnet-q89td"
	I0407 13:07:31.335665       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-q89td" node="ha-573100-m03"
	
	
	==> kubelet <==
	Apr 07 13:04:59 ha-573100 kubelet[2389]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 07 13:04:59 ha-573100 kubelet[2389]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 07 13:04:59 ha-573100 kubelet[2389]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 07 13:05:59 ha-573100 kubelet[2389]: E0407 13:05:59.317581    2389 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 07 13:05:59 ha-573100 kubelet[2389]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 07 13:05:59 ha-573100 kubelet[2389]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 07 13:05:59 ha-573100 kubelet[2389]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 07 13:05:59 ha-573100 kubelet[2389]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 07 13:06:59 ha-573100 kubelet[2389]: E0407 13:06:59.319844    2389 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 07 13:06:59 ha-573100 kubelet[2389]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 07 13:06:59 ha-573100 kubelet[2389]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 07 13:06:59 ha-573100 kubelet[2389]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 07 13:06:59 ha-573100 kubelet[2389]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 07 13:07:59 ha-573100 kubelet[2389]: E0407 13:07:59.317962    2389 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 07 13:07:59 ha-573100 kubelet[2389]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 07 13:07:59 ha-573100 kubelet[2389]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 07 13:07:59 ha-573100 kubelet[2389]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 07 13:07:59 ha-573100 kubelet[2389]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 07 13:08:44 ha-573100 kubelet[2389]: I0407 13:08:44.076457    2389 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s4wxf\" (UniqueName: \"kubernetes.io/projected/b62fd6bf-dadf-4751-845c-1c1bf8382fd8-kube-api-access-s4wxf\") pod \"busybox-58667487b6-tj2cw\" (UID: \"b62fd6bf-dadf-4751-845c-1c1bf8382fd8\") " pod="default/busybox-58667487b6-tj2cw"
	Apr 07 13:08:45 ha-573100 kubelet[2389]: I0407 13:08:45.301592    2389 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b6426d1240b751896b681cc7894d8a9bafa41a6d27f50fe9a91982928cecea31"
	Apr 07 13:08:59 ha-573100 kubelet[2389]: E0407 13:08:59.322934    2389 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 07 13:08:59 ha-573100 kubelet[2389]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 07 13:08:59 ha-573100 kubelet[2389]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 07 13:08:59 ha-573100 kubelet[2389]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 07 13:08:59 ha-573100 kubelet[2389]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-573100 -n ha-573100
E0407 13:09:58.820770    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-573100 -n ha-573100: (12.3915803s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-573100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (69.15s)
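
The post-mortem above ends by listing pods whose phase is not Running, using the kubectl field selector shown at helpers_test.go:261. The following is a minimal Go sketch that re-runs that same query outside the test harness; the context name and kubectl flags are taken verbatim from the log, while the wrapper itself is illustrative and not part of the test suite.

// Re-run the post-mortem query from the log: list every pod in the
// ha-573100 cluster whose phase is not Running. Illustrative only.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command(
		"kubectl", "--context", "ha-573100",
		"get", "po", "-A",
		"-o=jsonpath={.items[*].metadata.name}",
		"--field-selector=status.phase!=Running",
	).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl failed: %v\n%s\n", err, out)
		return
	}
	names := strings.Fields(string(out))
	if len(names) == 0 {
		fmt.Println("all pods are Running")
		return
	}
	fmt.Println("non-Running pods:", names)
}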

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (92.84s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:365: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 node stop m02 -v=7 --alsologtostderr
ha_test.go:365: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 node stop m02 -v=7 --alsologtostderr: (35.7471937s)
ha_test.go:371: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 status -v=7 --alsologtostderr
E0407 13:26:38.828707    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 13:26:55.746187    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:371: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p ha-573100 status -v=7 --alsologtostderr: exit status 1 (21.2902194s)

                                                
                                                
** stderr ** 
	I0407 13:26:35.387225   13120 out.go:345] Setting OutFile to fd 1708 ...
	I0407 13:26:35.466207   13120 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:26:35.466207   13120 out.go:358] Setting ErrFile to fd 1432...
	I0407 13:26:35.466207   13120 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:26:35.483651   13120 out.go:352] Setting JSON to false
	I0407 13:26:35.483651   13120 mustload.go:65] Loading cluster: ha-573100
	I0407 13:26:35.483835   13120 notify.go:220] Checking for updates...
	I0407 13:26:35.484663   13120 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:26:35.484743   13120 status.go:174] checking status of ha-573100 ...
	I0407 13:26:35.485777   13120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 13:26:37.824788   13120 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:26:37.824866   13120 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:26:37.824866   13120 status.go:371] ha-573100 host status = "Running" (err=<nil>)
	I0407 13:26:37.824866   13120 host.go:66] Checking if "ha-573100" exists ...
	I0407 13:26:37.825664   13120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 13:26:40.104716   13120 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:26:40.104716   13120 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:26:40.104716   13120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 13:26:42.765300   13120 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 13:26:42.765388   13120 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:26:42.765388   13120 host.go:66] Checking if "ha-573100" exists ...
	I0407 13:26:42.777314   13120 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 13:26:42.778347   13120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 13:26:44.955170   13120 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:26:44.955170   13120 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:26:44.955412   13120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 13:26:47.576292   13120 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 13:26:47.576292   13120 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:26:47.576556   13120 sshutil.go:53] new ssh client: &{IP:172.17.95.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\id_rsa Username:docker}
	I0407 13:26:47.681810   13120 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (4.9044683s)
	I0407 13:26:47.693799   13120 ssh_runner.go:195] Run: systemctl --version
	I0407 13:26:47.714748   13120 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:26:47.738836   13120 kubeconfig.go:125] found "ha-573100" server: "https://172.17.95.254:8443"
	I0407 13:26:47.738836   13120 api_server.go:166] Checking apiserver status ...
	I0407 13:26:47.749796   13120 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:26:47.786015   13120 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2256/cgroup
	W0407 13:26:47.809059   13120 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2256/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0407 13:26:47.820905   13120 ssh_runner.go:195] Run: ls
	I0407 13:26:47.827633   13120 api_server.go:253] Checking apiserver healthz at https://172.17.95.254:8443/healthz ...
	I0407 13:26:47.835110   13120 api_server.go:279] https://172.17.95.254:8443/healthz returned 200:
	ok
	I0407 13:26:47.835183   13120 status.go:463] ha-573100 apiserver status = Running (err=<nil>)
	I0407 13:26:47.835326   13120 status.go:176] ha-573100 status: &{Name:ha-573100 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:26:47.835362   13120 status.go:174] checking status of ha-573100-m02 ...
	I0407 13:26:47.836202   13120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:26:50.046917   13120 main.go:141] libmachine: [stdout =====>] : Off
	
	I0407 13:26:50.046917   13120 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:26:50.047192   13120 status.go:371] ha-573100-m02 host status = "Stopped" (err=<nil>)
	I0407 13:26:50.047192   13120 status.go:384] host is not running, skipping remaining checks
	I0407 13:26:50.047192   13120 status.go:176] ha-573100-m02 status: &{Name:ha-573100-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 13:26:50.047285   13120 status.go:174] checking status of ha-573100-m03 ...
	I0407 13:26:50.048186   13120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:26:52.339985   13120 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:26:52.339985   13120 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:26:52.339985   13120 status.go:371] ha-573100-m03 host status = "Running" (err=<nil>)
	I0407 13:26:52.339985   13120 host.go:66] Checking if "ha-573100-m03" exists ...
	I0407 13:26:52.341268   13120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:26:54.558545   13120 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:26:54.558545   13120 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:26:54.558545   13120 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]

                                                
                                                
** /stderr **
ha_test.go:374: failed to run minikube status. args "out/minikube-windows-amd64.exe -p ha-573100 status -v=7 --alsologtostderr" : exit status 1
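
The stderr trace above shows the shape of the per-node status check: query the VM state through Hyper-V's PowerShell module, and for a running node probe the apiserver healthz endpoint on the HA virtual IP. The Go sketch below mirrors that flow for illustration only; it is not minikube's implementation. The node names, the virtual IP 172.17.95.254, and port 8443 come from the log, everything else (function names, TLS handling) is assumed.

// Hypothetical sketch of the status-check flow visible in the stderr log:
// ask Hyper-V for each VM's state, then probe the apiserver /healthz.
package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"strings"
	"time"
)

// vmState mirrors the "( Hyper-V\Get-VM <name> ).state" calls in the log.
func vmState(name string) (string, error) {
	out, err := exec.Command(
		"powershell.exe", "-NoProfile", "-NonInteractive",
		fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", name),
	).Output()
	if err != nil {
		return "", err
	}
	return strings.TrimSpace(string(out)), nil
}

// apiServerHealthy probes https://<host>:8443/healthz and treats HTTP 200 as
// healthy. Certificate verification is skipped here because this sketch does
// not load the cluster CA; a real check would verify against it.
func apiServerHealthy(host string) bool {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://" + host + ":8443/healthz")
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}

func main() {
	for _, node := range []string{"ha-573100", "ha-573100-m02", "ha-573100-m03"} {
		state, err := vmState(node)
		if err != nil {
			fmt.Printf("%s: error querying Hyper-V: %v\n", node, err)
			continue
		}
		fmt.Printf("%s: host=%s\n", node, state)
	}
	fmt.Printf("apiserver healthy: %v\n", apiServerHealthy("172.17.95.254"))
}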
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-573100 -n ha-573100
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p ha-573100 -n ha-573100: (12.3730195s)
helpers_test.go:244: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 logs -n 25: (8.8555789s)
helpers_test.go:252: TestMultiControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |  Profile  |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	| cp      | ha-573100 cp ha-573100-m03:/home/docker/cp-test.txt                                                                      | ha-573100 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:21 UTC | 07 Apr 25 13:21 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile384422757\001\cp-test_ha-573100-m03.txt |           |                   |         |                     |                     |
	| ssh     | ha-573100 ssh -n                                                                                                         | ha-573100 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:21 UTC | 07 Apr 25 13:21 UTC |
	|         | ha-573100-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-573100 cp ha-573100-m03:/home/docker/cp-test.txt                                                                      | ha-573100 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:21 UTC | 07 Apr 25 13:21 UTC |
	|         | ha-573100:/home/docker/cp-test_ha-573100-m03_ha-573100.txt                                                               |           |                   |         |                     |                     |
	| ssh     | ha-573100 ssh -n                                                                                                         | ha-573100 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:21 UTC | 07 Apr 25 13:22 UTC |
	|         | ha-573100-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-573100 ssh -n ha-573100 sudo cat                                                                                      | ha-573100 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:22 UTC | 07 Apr 25 13:22 UTC |
	|         | /home/docker/cp-test_ha-573100-m03_ha-573100.txt                                                                         |           |                   |         |                     |                     |
	| cp      | ha-573100 cp ha-573100-m03:/home/docker/cp-test.txt                                                                      | ha-573100 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:22 UTC | 07 Apr 25 13:22 UTC |
	|         | ha-573100-m02:/home/docker/cp-test_ha-573100-m03_ha-573100-m02.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-573100 ssh -n                                                                                                         | ha-573100 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:22 UTC | 07 Apr 25 13:22 UTC |
	|         | ha-573100-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-573100 ssh -n ha-573100-m02 sudo cat                                                                                  | ha-573100 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:22 UTC | 07 Apr 25 13:22 UTC |
	|         | /home/docker/cp-test_ha-573100-m03_ha-573100-m02.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-573100 cp ha-573100-m03:/home/docker/cp-test.txt                                                                      | ha-573100 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:22 UTC | 07 Apr 25 13:23 UTC |
	|         | ha-573100-m04:/home/docker/cp-test_ha-573100-m03_ha-573100-m04.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-573100 ssh -n                                                                                                         | ha-573100 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:23 UTC | 07 Apr 25 13:23 UTC |
	|         | ha-573100-m03 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-573100 ssh -n ha-573100-m04 sudo cat                                                                                  | ha-573100 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:23 UTC | 07 Apr 25 13:23 UTC |
	|         | /home/docker/cp-test_ha-573100-m03_ha-573100-m04.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-573100 cp testdata\cp-test.txt                                                                                        | ha-573100 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:23 UTC | 07 Apr 25 13:23 UTC |
	|         | ha-573100-m04:/home/docker/cp-test.txt                                                                                   |           |                   |         |                     |                     |
	| ssh     | ha-573100 ssh -n                                                                                                         | ha-573100 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:23 UTC | 07 Apr 25 13:23 UTC |
	|         | ha-573100-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-573100 cp ha-573100-m04:/home/docker/cp-test.txt                                                                      | ha-573100 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:23 UTC | 07 Apr 25 13:24 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile384422757\001\cp-test_ha-573100-m04.txt |           |                   |         |                     |                     |
	| ssh     | ha-573100 ssh -n                                                                                                         | ha-573100 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:24 UTC | 07 Apr 25 13:24 UTC |
	|         | ha-573100-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| cp      | ha-573100 cp ha-573100-m04:/home/docker/cp-test.txt                                                                      | ha-573100 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:24 UTC | 07 Apr 25 13:24 UTC |
	|         | ha-573100:/home/docker/cp-test_ha-573100-m04_ha-573100.txt                                                               |           |                   |         |                     |                     |
	| ssh     | ha-573100 ssh -n                                                                                                         | ha-573100 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:24 UTC | 07 Apr 25 13:24 UTC |
	|         | ha-573100-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-573100 ssh -n ha-573100 sudo cat                                                                                      | ha-573100 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:24 UTC | 07 Apr 25 13:24 UTC |
	|         | /home/docker/cp-test_ha-573100-m04_ha-573100.txt                                                                         |           |                   |         |                     |                     |
	| cp      | ha-573100 cp ha-573100-m04:/home/docker/cp-test.txt                                                                      | ha-573100 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:24 UTC | 07 Apr 25 13:25 UTC |
	|         | ha-573100-m02:/home/docker/cp-test_ha-573100-m04_ha-573100-m02.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-573100 ssh -n                                                                                                         | ha-573100 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:25 UTC | 07 Apr 25 13:25 UTC |
	|         | ha-573100-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-573100 ssh -n ha-573100-m02 sudo cat                                                                                  | ha-573100 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:25 UTC | 07 Apr 25 13:25 UTC |
	|         | /home/docker/cp-test_ha-573100-m04_ha-573100-m02.txt                                                                     |           |                   |         |                     |                     |
	| cp      | ha-573100 cp ha-573100-m04:/home/docker/cp-test.txt                                                                      | ha-573100 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:25 UTC | 07 Apr 25 13:25 UTC |
	|         | ha-573100-m03:/home/docker/cp-test_ha-573100-m04_ha-573100-m03.txt                                                       |           |                   |         |                     |                     |
	| ssh     | ha-573100 ssh -n                                                                                                         | ha-573100 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:25 UTC | 07 Apr 25 13:25 UTC |
	|         | ha-573100-m04 sudo cat                                                                                                   |           |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |           |                   |         |                     |                     |
	| ssh     | ha-573100 ssh -n ha-573100-m03 sudo cat                                                                                  | ha-573100 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:25 UTC | 07 Apr 25 13:25 UTC |
	|         | /home/docker/cp-test_ha-573100-m04_ha-573100-m03.txt                                                                     |           |                   |         |                     |                     |
	| node    | ha-573100 node stop m02 -v=7                                                                                             | ha-573100 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:25 UTC | 07 Apr 25 13:26 UTC |
	|         | --alsologtostderr                                                                                                        |           |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|-----------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:56:56
	Running on machine: minikube3
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:56:56.656239    7088 out.go:345] Setting OutFile to fd 1476 ...
	I0407 12:56:56.735608    7088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:56:56.735608    7088 out.go:358] Setting ErrFile to fd 1632...
	I0407 12:56:56.735608    7088 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:56:56.755799    7088 out.go:352] Setting JSON to false
	I0407 12:56:56.758800    7088 start.go:129] hostinfo: {"hostname":"minikube3","uptime":2409,"bootTime":1744028207,"procs":179,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0407 12:56:56.759802    7088 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0407 12:56:56.768668    7088 out.go:177] * [ha-573100] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0407 12:56:56.772086    7088 notify.go:220] Checking for updates...
	I0407 12:56:56.775737    7088 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 12:56:56.778615    7088 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 12:56:56.781592    7088 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0407 12:56:56.784776    7088 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 12:56:56.787572    7088 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:56:56.790068    7088 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:57:02.018232    7088 out.go:177] * Using the hyperv driver based on user configuration
	I0407 12:57:02.022340    7088 start.go:297] selected driver: hyperv
	I0407 12:57:02.022340    7088 start.go:901] validating driver "hyperv" against <nil>
	I0407 12:57:02.022340    7088 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 12:57:02.069649    7088 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 12:57:02.071468    7088 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 12:57:02.071617    7088 cni.go:84] Creating CNI manager for ""
	I0407 12:57:02.071691    7088 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0407 12:57:02.071691    7088 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0407 12:57:02.071880    7088 start.go:340] cluster config:
	{Name:ha-573100 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-573100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:57:02.072172    7088 iso.go:125] acquiring lock: {Name:mk99bbb6a54210c1995fdf151b41c83b57c3735b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 12:57:02.076951    7088 out.go:177] * Starting "ha-573100" primary control-plane node in "ha-573100" cluster
	I0407 12:57:02.079104    7088 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 12:57:02.079632    7088 preload.go:146] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0407 12:57:02.079632    7088 cache.go:56] Caching tarball of preloaded images
	I0407 12:57:02.079882    7088 preload.go:172] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0407 12:57:02.079882    7088 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0407 12:57:02.080979    7088 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\config.json ...
	I0407 12:57:02.080979    7088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\config.json: {Name:mkee596f205fc528f696d7e985c07299fecd44dd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:57:02.082308    7088 start.go:360] acquireMachinesLock for ha-573100: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 12:57:02.082308    7088 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-573100"
	I0407 12:57:02.082308    7088 start.go:93] Provisioning new machine with config: &{Name:ha-573100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-573100 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 12:57:02.082308    7088 start.go:125] createHost starting for "" (driver="hyperv")
	I0407 12:57:02.086455    7088 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0407 12:57:02.086455    7088 start.go:159] libmachine.API.Create for "ha-573100" (driver="hyperv")
	I0407 12:57:02.086455    7088 client.go:168] LocalClient.Create starting
	I0407 12:57:02.087506    7088 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0407 12:57:02.087506    7088 main.go:141] libmachine: Decoding PEM data...
	I0407 12:57:02.087506    7088 main.go:141] libmachine: Parsing certificate...
	I0407 12:57:02.087506    7088 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0407 12:57:02.087506    7088 main.go:141] libmachine: Decoding PEM data...
	I0407 12:57:02.087506    7088 main.go:141] libmachine: Parsing certificate...
	I0407 12:57:02.087506    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0407 12:57:04.112859    7088 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0407 12:57:04.113042    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:04.113111    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0407 12:57:05.726263    7088 main.go:141] libmachine: [stdout =====>] : False
	
	I0407 12:57:05.727089    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:05.727089    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0407 12:57:07.154218    7088 main.go:141] libmachine: [stdout =====>] : True
	
	I0407 12:57:07.154523    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:07.154523    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0407 12:57:10.589998    7088 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0407 12:57:10.589998    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:10.592051    7088 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0407 12:57:11.106028    7088 main.go:141] libmachine: Creating SSH key...
	I0407 12:57:11.374353    7088 main.go:141] libmachine: Creating VM...
	I0407 12:57:11.374353    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0407 12:57:14.119317    7088 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0407 12:57:14.119381    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:14.119441    7088 main.go:141] libmachine: Using switch "Default Switch"
	I0407 12:57:14.119502    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0407 12:57:15.846969    7088 main.go:141] libmachine: [stdout =====>] : True
	
	I0407 12:57:15.847147    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:15.847205    7088 main.go:141] libmachine: Creating VHD
	I0407 12:57:15.847205    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\fixed.vhd' -SizeBytes 10MB -Fixed
	I0407 12:57:19.573858    7088 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : E1DD200D-A45E-4B28-A627-5E3F3FBE7F93
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0407 12:57:19.573858    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:19.574146    7088 main.go:141] libmachine: Writing magic tar header
	I0407 12:57:19.574264    7088 main.go:141] libmachine: Writing SSH key tar header
	I0407 12:57:19.587836    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\disk.vhd' -VHDType Dynamic -DeleteSource
	I0407 12:57:22.694128    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 12:57:22.694567    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:22.694567    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\disk.vhd' -SizeBytes 20000MB
	I0407 12:57:25.212818    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 12:57:25.213338    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:25.213412    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-573100 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0407 12:57:28.719478    7088 main.go:141] libmachine: [stdout =====>] : 
	Name      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----      ----- ----------- ----------------- ------   ------             -------
	ha-573100 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0407 12:57:28.719527    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:28.719527    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-573100 -DynamicMemoryEnabled $false
	I0407 12:57:30.946657    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 12:57:30.946912    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:30.946912    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-573100 -Count 2
	I0407 12:57:33.077863    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 12:57:33.077863    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:33.077863    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-573100 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\boot2docker.iso'
	I0407 12:57:35.632234    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 12:57:35.632234    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:35.632234    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-573100 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\disk.vhd'
	I0407 12:57:38.181883    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 12:57:38.181883    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:38.181883    7088 main.go:141] libmachine: Starting VM...
	I0407 12:57:38.181955    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-573100
	I0407 12:57:41.204491    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 12:57:41.204491    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:41.205063    7088 main.go:141] libmachine: Waiting for host to start...
	I0407 12:57:41.205063    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:57:43.431004    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:57:43.431395    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:43.431606    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:57:45.920193    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 12:57:45.920193    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:46.921532    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:57:49.099498    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:57:49.100524    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:49.100524    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:57:51.579432    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 12:57:51.579518    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:52.580131    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:57:54.732864    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:57:54.732864    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:54.733266    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:57:57.277325    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 12:57:57.277325    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:57:58.278890    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:58:00.458453    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:58:00.458453    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:00.458779    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:58:02.953414    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 12:58:02.953414    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:03.953667    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:58:06.160201    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:58:06.160201    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:06.160427    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:58:08.683628    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:58:08.684366    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:08.684492    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:58:10.784582    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:58:10.785357    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:10.785408    7088 machine.go:93] provisionDockerMachine start ...
	I0407 12:58:10.785408    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:58:12.887155    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:58:12.887155    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:12.887657    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:58:15.337513    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:58:15.337513    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:15.343727    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 12:58:15.359815    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.95.223 22 <nil> <nil>}
	I0407 12:58:15.359815    7088 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 12:58:15.488436    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0407 12:58:15.488556    7088 buildroot.go:166] provisioning hostname "ha-573100"
	I0407 12:58:15.488683    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:58:17.580405    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:58:17.580946    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:17.580946    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:58:19.992826    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:58:19.992958    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:19.997864    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 12:58:19.998596    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.95.223 22 <nil> <nil>}
	I0407 12:58:19.998596    7088 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-573100 && echo "ha-573100" | sudo tee /etc/hostname
	I0407 12:58:20.161609    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-573100
	
	I0407 12:58:20.161609    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:58:22.240077    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:58:22.240077    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:22.240873    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:58:24.682344    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:58:24.682344    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:24.688821    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 12:58:24.689564    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.95.223 22 <nil> <nil>}
	I0407 12:58:24.689564    7088 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-573100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-573100/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-573100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 12:58:24.843318    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 12:58:24.843318    7088 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0407 12:58:24.843318    7088 buildroot.go:174] setting up certificates
	I0407 12:58:24.843318    7088 provision.go:84] configureAuth start
	I0407 12:58:24.843318    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:58:26.895365    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:58:26.895365    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:26.896258    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:58:29.342609    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:58:29.342870    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:29.342870    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:58:31.410057    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:58:31.410146    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:31.410146    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:58:33.899865    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:58:33.900440    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:33.900519    7088 provision.go:143] copyHostCerts
	I0407 12:58:33.900519    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0407 12:58:33.901257    7088 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0407 12:58:33.901257    7088 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0407 12:58:33.901257    7088 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0407 12:58:33.902518    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0407 12:58:33.903166    7088 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0407 12:58:33.903210    7088 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0407 12:58:33.903601    7088 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0407 12:58:33.906270    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0407 12:58:33.906638    7088 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0407 12:58:33.906681    7088 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0407 12:58:33.907041    7088 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0407 12:58:33.908163    7088 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-573100 san=[127.0.0.1 172.17.95.223 ha-573100 localhost minikube]
	I0407 12:58:34.284036    7088 provision.go:177] copyRemoteCerts
	I0407 12:58:34.296897    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 12:58:34.296897    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:58:36.307926    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:58:36.307981    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:36.308066    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:58:38.820032    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:58:38.820032    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:38.820975    7088 sshutil.go:53] new ssh client: &{IP:172.17.95.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\id_rsa Username:docker}
	I0407 12:58:38.923780    7088 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.6268653s)
	I0407 12:58:38.923780    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0407 12:58:38.924346    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 12:58:38.966386    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0407 12:58:38.966386    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1196 bytes)
	I0407 12:58:39.010525    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0407 12:58:39.010525    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0407 12:58:39.056348    7088 provision.go:87] duration metric: took 14.2129735s to configureAuth
	I0407 12:58:39.056348    7088 buildroot.go:189] setting minikube options for container-runtime
	I0407 12:58:39.056979    7088 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 12:58:39.056979    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:58:41.157468    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:58:41.157468    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:41.157569    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:58:43.588373    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:58:43.588970    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:43.594044    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 12:58:43.595149    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.95.223 22 <nil> <nil>}
	I0407 12:58:43.595149    7088 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0407 12:58:43.718879    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0407 12:58:43.718879    7088 buildroot.go:70] root file system type: tmpfs
	I0407 12:58:43.719860    7088 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0407 12:58:43.719860    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:58:45.744799    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:58:45.745800    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:45.745800    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:58:48.151778    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:58:48.151778    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:48.157643    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 12:58:48.158263    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.95.223 22 <nil> <nil>}
	I0407 12:58:48.158852    7088 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0407 12:58:48.322038    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0407 12:58:48.322681    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:58:50.387974    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:58:50.388228    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:50.388228    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:58:52.832791    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:58:52.833817    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:52.838799    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 12:58:52.839060    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.95.223 22 <nil> <nil>}
	I0407 12:58:52.839060    7088 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0407 12:58:55.024916    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0407 12:58:55.024916    7088 machine.go:96] duration metric: took 44.23933s to provisionDockerMachine
	I0407 12:58:55.024916    7088 client.go:171] duration metric: took 1m52.9380074s to LocalClient.Create
	I0407 12:58:55.024916    7088 start.go:167] duration metric: took 1m52.9380074s to libmachine.API.Create "ha-573100"
	I0407 12:58:55.024916    7088 start.go:293] postStartSetup for "ha-573100" (driver="hyperv")
	I0407 12:58:55.024916    7088 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 12:58:55.037839    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 12:58:55.038353    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:58:57.111631    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:58:57.112451    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:57.112621    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:58:59.543898    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:58:59.544967    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:58:59.545264    7088 sshutil.go:53] new ssh client: &{IP:172.17.95.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\id_rsa Username:docker}
	I0407 12:58:59.660093    7088 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6222349s)
	I0407 12:58:59.671846    7088 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 12:58:59.681362    7088 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 12:58:59.681503    7088 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0407 12:58:59.682368    7088 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0407 12:58:59.683827    7088 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> 77282.pem in /etc/ssl/certs
	I0407 12:58:59.683919    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> /etc/ssl/certs/77282.pem
	I0407 12:58:59.695801    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 12:58:59.712595    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem --> /etc/ssl/certs/77282.pem (1708 bytes)
	I0407 12:58:59.756674    7088 start.go:296] duration metric: took 4.7317392s for postStartSetup
	I0407 12:58:59.759800    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:59:01.805263    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:59:01.805263    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:59:01.805263    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:59:04.255011    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:59:04.255196    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:59:04.255196    7088 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\config.json ...
	I0407 12:59:04.258180    7088 start.go:128] duration metric: took 2m2.1753813s to createHost
	I0407 12:59:04.258254    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:59:06.315549    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:59:06.315549    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:59:06.316582    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:59:08.867920    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:59:08.867920    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:59:08.873756    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 12:59:08.874511    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.95.223 22 <nil> <nil>}
	I0407 12:59:08.874511    7088 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 12:59:09.016590    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744030749.029228276
	
	I0407 12:59:09.016665    7088 fix.go:216] guest clock: 1744030749.029228276
	I0407 12:59:09.016665    7088 fix.go:229] Guest: 2025-04-07 12:59:09.029228276 +0000 UTC Remote: 2025-04-07 12:59:04.258254 +0000 UTC m=+127.699871101 (delta=4.770974276s)
	I0407 12:59:09.016803    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:59:11.130936    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:59:11.131960    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:59:11.131960    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:59:13.631084    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:59:13.631084    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:59:13.638672    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 12:59:13.639427    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.95.223 22 <nil> <nil>}
	I0407 12:59:13.639427    7088 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744030749
	I0407 12:59:13.793259    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  7 12:59:09 UTC 2025
	
	I0407 12:59:13.793316    7088 fix.go:236] clock set: Mon Apr  7 12:59:09 UTC 2025
	 (err=<nil>)
	I0407 12:59:13.793373    7088 start.go:83] releasing machines lock for "ha-573100", held for 2m11.7105353s
	I0407 12:59:13.793684    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:59:15.913439    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:59:15.913898    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:59:15.913898    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:59:18.358112    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:59:18.358112    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:59:18.362986    7088 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0407 12:59:18.362986    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:59:18.372488    7088 ssh_runner.go:195] Run: cat /version.json
	I0407 12:59:18.372488    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 12:59:20.548439    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:59:20.548886    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 12:59:20.548886    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:59:20.548886    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:59:20.548886    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:59:20.549086    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 12:59:23.189509    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:59:23.189509    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:59:23.189751    7088 sshutil.go:53] new ssh client: &{IP:172.17.95.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\id_rsa Username:docker}
	I0407 12:59:23.209875    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 12:59:23.209875    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 12:59:23.209875    7088 sshutil.go:53] new ssh client: &{IP:172.17.95.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\id_rsa Username:docker}
	I0407 12:59:23.289922    7088 ssh_runner.go:235] Completed: cat /version.json: (4.9174134s)
	I0407 12:59:23.300131    7088 ssh_runner.go:195] Run: systemctl --version
	I0407 12:59:23.305113    7088 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9421064s)
	W0407 12:59:23.305113    7088 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0407 12:59:23.321796    7088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 12:59:23.330868    7088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 12:59:23.340640    7088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 12:59:23.366008    7088 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 12:59:23.366008    7088 start.go:495] detecting cgroup driver to use...
	I0407 12:59:23.366094    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 12:59:23.413088    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0407 12:59:23.442041    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0407 12:59:23.460440    7088 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	W0407 12:59:23.463963    7088 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0407 12:59:23.463963    7088 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0407 12:59:23.472444    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0407 12:59:23.506793    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 12:59:23.536110    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0407 12:59:23.564748    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 12:59:23.594692    7088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 12:59:23.627833    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0407 12:59:23.656081    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0407 12:59:23.685167    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0407 12:59:23.713037    7088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 12:59:23.729645    7088 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 12:59:23.740804    7088 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 12:59:23.769828    7088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 12:59:23.794874    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:59:23.966703    7088 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0407 12:59:23.993450    7088 start.go:495] detecting cgroup driver to use...
	I0407 12:59:24.004708    7088 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0407 12:59:24.041488    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 12:59:24.077912    7088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 12:59:24.115551    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 12:59:24.149725    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 12:59:24.182069    7088 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0407 12:59:24.241507    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 12:59:24.265908    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 12:59:24.312108    7088 ssh_runner.go:195] Run: which cri-dockerd
	I0407 12:59:24.328142    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0407 12:59:24.347177    7088 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0407 12:59:24.387877    7088 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0407 12:59:24.576285    7088 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0407 12:59:24.757301    7088 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0407 12:59:24.757524    7088 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0407 12:59:24.800260    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:59:24.981084    7088 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0407 12:59:27.550187    7088 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5689591s)
	I0407 12:59:27.561136    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0407 12:59:27.594889    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 12:59:27.629182    7088 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0407 12:59:27.812832    7088 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0407 12:59:27.990048    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:59:28.172135    7088 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0407 12:59:28.213110    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 12:59:28.246922    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:59:28.434067    7088 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0407 12:59:28.528385    7088 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0407 12:59:28.538278    7088 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0407 12:59:28.547059    7088 start.go:563] Will wait 60s for crictl version
	I0407 12:59:28.557499    7088 ssh_runner.go:195] Run: which crictl
	I0407 12:59:28.572978    7088 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 12:59:28.621588    7088 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0407 12:59:28.629647    7088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 12:59:28.673236    7088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 12:59:28.713048    7088 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0407 12:59:28.713234    7088 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0407 12:59:28.717774    7088 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0407 12:59:28.717774    7088 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0407 12:59:28.717774    7088 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0407 12:59:28.717774    7088 ip.go:211] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:71:e6:e3 Flags:up|broadcast|multicast|running}
	I0407 12:59:28.720002    7088 ip.go:214] interface addr: fe80::8b22:fcbb:f73:c9e5/64
	I0407 12:59:28.721071    7088 ip.go:214] interface addr: 172.17.80.1/20
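The ip.go lines above scan the host's network adapters for one whose name starts with "vEthernet (Default Switch)", skip the non-matching adapters, and take its IPv4 address (172.17.80.1/20 here) to use as host.minikube.internal. A minimal stand-alone sketch of that prefix-match logic is below; it is not minikube's actual ip.go, and the function name is illustrative.

package main

import (
	"fmt"
	"net"
	"strings"
)

// findInterfaceByPrefix mirrors the behaviour visible in the ip.go log lines:
// return the first IPv4 address of an interface whose name starts with prefix.
func findInterfaceByPrefix(prefix string) (string, error) {
	ifaces, err := net.Interfaces()
	if err != nil {
		return "", err
	}
	for _, iface := range ifaces {
		if !strings.HasPrefix(iface.Name, prefix) {
			continue // e.g. "Ethernet 2", "Loopback Pseudo-Interface 1" above
		}
		addrs, err := iface.Addrs()
		if err != nil {
			return "", err
		}
		for _, addr := range addrs {
			if ipnet, ok := addr.(*net.IPNet); ok && ipnet.IP.To4() != nil {
				return ipnet.IP.String(), nil
			}
		}
	}
	return "", fmt.Errorf("no interface matches prefix %q", prefix)
}

func main() {
	ip, err := findInterfaceByPrefix("vEthernet (Default Switch)")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("host-side address:", ip) // 172.17.80.1 in this run
}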
	I0407 12:59:28.731720    7088 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0407 12:59:28.736346    7088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 12:59:28.769852    7088 kubeadm.go:883] updating cluster {Name:ha-573100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-573100 Namespac
e:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.95.223 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 12:59:28.769852    7088 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 12:59:28.777251    7088 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0407 12:59:28.802686    7088 docker.go:689] Got preloaded images: 
	I0407 12:59:28.802744    7088 docker.go:695] registry.k8s.io/kube-apiserver:v1.32.2 wasn't preloaded
	I0407 12:59:28.813455    7088 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0407 12:59:28.841947    7088 ssh_runner.go:195] Run: which lz4
	I0407 12:59:28.848687    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0407 12:59:28.858634    7088 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0407 12:59:28.865059    7088 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0407 12:59:28.865089    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (349803115 bytes)
	I0407 12:59:30.745515    7088 docker.go:653] duration metric: took 1.8964218s to copy over tarball
	I0407 12:59:30.755951    7088 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0407 12:59:39.699486    7088 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.9434985s)
	I0407 12:59:39.699486    7088 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0407 12:59:39.761441    7088 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0407 12:59:39.779545    7088 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0407 12:59:39.821149    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:59:40.027982    7088 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0407 12:59:43.098176    7088 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.0701815s)
	I0407 12:59:43.106156    7088 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0407 12:59:43.134156    7088 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0407 12:59:43.134156    7088 cache_images.go:84] Images are preloaded, skipping loading
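Before and after extracting the preload tarball, minikube lists the images inside the VM with `docker images --format {{.Repository}}:{{.Tag}}` and decides whether anything still needs to be loaded; the first listing was empty, the second contains the full v1.32.2 set. A small sketch of that kind of presence check, run against a local docker CLI (an assumption; minikube runs it over SSH inside the VM):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Same command the log shows minikube running.
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Println("docker images failed:", err)
		return
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	// Expected set taken from the "Got preloaded images" block above.
	expected := []string{
		"registry.k8s.io/kube-apiserver:v1.32.2",
		"registry.k8s.io/kube-controller-manager:v1.32.2",
		"registry.k8s.io/kube-scheduler:v1.32.2",
		"registry.k8s.io/kube-proxy:v1.32.2",
		"registry.k8s.io/etcd:3.5.16-0",
		"registry.k8s.io/coredns/coredns:v1.11.3",
		"registry.k8s.io/pause:3.10",
		"gcr.io/k8s-minikube/storage-provisioner:v5",
	}
	for _, img := range expected {
		if !have[img] {
			fmt.Println("missing, would trigger an image load:", img)
		}
	}
}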
	I0407 12:59:43.134156    7088 kubeadm.go:934] updating node { 172.17.95.223 8443 v1.32.2 docker true true} ...
	I0407 12:59:43.134156    7088 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-573100 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.95.223
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-573100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 12:59:43.143699    7088 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0407 12:59:43.206593    7088 cni.go:84] Creating CNI manager for ""
	I0407 12:59:43.206593    7088 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0407 12:59:43.206593    7088 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 12:59:43.206593    7088 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.95.223 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-573100 NodeName:ha-573100 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.95.223"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.95.223 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes
/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 12:59:43.206593    7088 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.95.223
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-573100"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.17.95.223"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.95.223"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
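The generated kubeadm config above is a single file holding four YAML documents separated by `---`: InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration. One quick way to sanity-check such a file is to decode it document by document and print each kind; the sketch below uses gopkg.in/yaml.v3 for that (an assumption made for the example, not something minikube itself does).

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3"
)

func main() {
	// Path is an assumption; in the VM the file lands at /var/tmp/minikube/kubeadm.yaml.
	f, err := os.Open("kubeadm.yaml")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // no more documents in the stream
			}
			panic(err)
		}
		// Each document in the generated config carries apiVersion and kind.
		fmt.Printf("%v / %v\n", doc["apiVersion"], doc["kind"])
	}
}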
	
	I0407 12:59:43.206593    7088 kube-vip.go:115] generating kube-vip config ...
	I0407 12:59:43.218023    7088 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0407 12:59:43.240974    7088 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0407 12:59:43.241291    7088 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.95.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
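The static pod manifest above runs kube-vip with cp_enable and lb_enable, so once the control plane is up the virtual IP 172.17.95.254:8443 should answer API-server requests (the kubeconfig written later in this run points at exactly that address). A hedged probe sketch is below: it skips TLS verification because the serving certificate is issued by the cluster's own CA, and it assumes anonymous access to /healthz is allowed, which is the Kubernetes default via the system:public-info-viewer role.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// This probe does not carry the cluster CA, so verification is skipped.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	resp, err := client.Get("https://172.17.95.254:8443/healthz")
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.StatusCode, string(body)) // expect 200 and "ok" once kube-vip holds the VIP
}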
	I0407 12:59:43.251799    7088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 12:59:43.266720    7088 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 12:59:43.276873    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0407 12:59:43.294245    7088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0407 12:59:43.321785    7088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 12:59:43.348443    7088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2290 bytes)
	I0407 12:59:43.377702    7088 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1447 bytes)
	I0407 12:59:43.413733    7088 ssh_runner.go:195] Run: grep 172.17.95.254	control-plane.minikube.internal$ /etc/hosts
	I0407 12:59:43.419693    7088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 12:59:43.447914    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 12:59:43.618339    7088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 12:59:43.642986    7088 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100 for IP: 172.17.95.223
	I0407 12:59:43.643113    7088 certs.go:194] generating shared ca certs ...
	I0407 12:59:43.643179    7088 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:59:43.643429    7088 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0407 12:59:43.644248    7088 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0407 12:59:43.644248    7088 certs.go:256] generating profile certs ...
	I0407 12:59:43.645333    7088 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\client.key
	I0407 12:59:43.645333    7088 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\client.crt with IP's: []
	I0407 12:59:43.804329    7088 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\client.crt ...
	I0407 12:59:43.804329    7088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\client.crt: {Name:mk21bbd0c664861c0fe2438c1431a34ed5a9b4df Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:59:43.806166    7088 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\client.key ...
	I0407 12:59:43.806166    7088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\client.key: {Name:mkfe6f6525a808b66b9dafe2a6932dc7a7cbf405 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:59:43.806982    7088 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key.5c786bb7
	I0407 12:59:43.807949    7088 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt.5c786bb7 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.95.223 172.17.95.254]
	I0407 12:59:43.907294    7088 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt.5c786bb7 ...
	I0407 12:59:43.907294    7088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt.5c786bb7: {Name:mk0efb6b0c51f2e14af56446225c8d2570bd23db Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:59:43.909083    7088 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key.5c786bb7 ...
	I0407 12:59:43.909083    7088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key.5c786bb7: {Name:mk60b5c7c5a6b211d5fb373ebfb305898b65796a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:59:43.910181    7088 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt.5c786bb7 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt
	I0407 12:59:43.925096    7088 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key.5c786bb7 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key
	I0407 12:59:43.926087    7088 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.key
	I0407 12:59:43.926087    7088 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.crt with IP's: []
	I0407 12:59:44.142211    7088 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.crt ...
	I0407 12:59:44.142211    7088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.crt: {Name:mk7df96e0f2dd05b3d9e0078537809f03b142a8f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:59:44.143085    7088 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.key ...
	I0407 12:59:44.143085    7088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.key: {Name:mk3ac6e8ed7073461261aeace881e163508e3bfa Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
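The certs.go lines above generate three profile certificates signed by the shared minikubeCA: a client certificate for "minikube-user", the apiserver serving certificate with the SANs listed at 12:59:43.807949 (including the HA VIP 172.17.95.254), and the aggregator proxy-client certificate. The sketch below shows the general shape of CA-signed certificate generation with crypto/x509; it is not minikube's crypto.go, and all names, lifetimes and key sizes are illustrative. Error handling is reduced to panics to keep it short.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"os"
	"time"
)

func main() {
	// Self-signed CA, standing in for the cached minikubeCA key pair.
	caKey, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(3 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign | x509.KeyUsageDigitalSignature,
		BasicConstraintsValid: true,
	}
	caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	caCert, _ := x509.ParseCertificate(caDER)

	// Client certificate signed by that CA, analogous to profiles\ha-573100\client.crt.
	clientKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	clientTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{CommonName: "minikube-user", Organization: []string{"system:masters"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageClientAuth},
	}
	clientDER, err := x509.CreateCertificate(rand.Reader, clientTmpl, caCert, &clientKey.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: clientDER})
}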
	I0407 12:59:44.144520    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0407 12:59:44.145061    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0407 12:59:44.145279    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0407 12:59:44.145444    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0407 12:59:44.145585    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0407 12:59:44.145618    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0407 12:59:44.145948    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0407 12:59:44.158852    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0407 12:59:44.160460    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem (1338 bytes)
	W0407 12:59:44.160997    7088 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728_empty.pem, impossibly tiny 0 bytes
	I0407 12:59:44.161116    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0407 12:59:44.161116    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0407 12:59:44.161714    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0407 12:59:44.162103    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0407 12:59:44.162344    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem (1708 bytes)
	I0407 12:59:44.162344    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:59:44.163063    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem -> /usr/share/ca-certificates/7728.pem
	I0407 12:59:44.163215    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> /usr/share/ca-certificates/77282.pem
	I0407 12:59:44.164368    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 12:59:44.206873    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 12:59:44.252029    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 12:59:44.294053    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0407 12:59:44.338456    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0407 12:59:44.381342    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0407 12:59:44.423351    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 12:59:44.465650    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 12:59:44.511069    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 12:59:44.553033    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem --> /usr/share/ca-certificates/7728.pem (1338 bytes)
	I0407 12:59:44.592421    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem --> /usr/share/ca-certificates/77282.pem (1708 bytes)
	I0407 12:59:44.635601    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 12:59:44.670389    7088 ssh_runner.go:195] Run: openssl version
	I0407 12:59:44.692726    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 12:59:44.725321    7088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:59:44.732486    7088 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:59:44.744337    7088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 12:59:44.761900    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 12:59:44.789885    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7728.pem && ln -fs /usr/share/ca-certificates/7728.pem /etc/ssl/certs/7728.pem"
	I0407 12:59:44.821332    7088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7728.pem
	I0407 12:59:44.828611    7088 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:38 /usr/share/ca-certificates/7728.pem
	I0407 12:59:44.838487    7088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7728.pem
	I0407 12:59:44.860192    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7728.pem /etc/ssl/certs/51391683.0"
	I0407 12:59:44.888302    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77282.pem && ln -fs /usr/share/ca-certificates/77282.pem /etc/ssl/certs/77282.pem"
	I0407 12:59:44.915794    7088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77282.pem
	I0407 12:59:44.922634    7088 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:38 /usr/share/ca-certificates/77282.pem
	I0407 12:59:44.932966    7088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77282.pem
	I0407 12:59:44.950247    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77282.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 12:59:44.977276    7088 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 12:59:44.984360    7088 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0407 12:59:44.984706    7088 kubeadm.go:392] StartCluster: {Name:ha-573100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-573100 Namespace:d
efault APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.95.223 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Moun
tUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:59:44.992570    7088 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0407 12:59:45.022692    7088 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 12:59:45.057904    7088 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 12:59:45.087874    7088 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 12:59:45.108100    7088 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 12:59:45.108172    7088 kubeadm.go:157] found existing configuration files:
	
	I0407 12:59:45.121242    7088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 12:59:45.142498    7088 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 12:59:45.153537    7088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 12:59:45.181630    7088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 12:59:45.197479    7088 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 12:59:45.206702    7088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 12:59:45.237301    7088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 12:59:45.253432    7088 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 12:59:45.264081    7088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 12:59:45.292068    7088 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 12:59:45.309360    7088 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 12:59:45.318662    7088 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 12:59:45.336726    7088 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 12:59:45.733165    7088 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 12:59:59.799619    7088 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0407 12:59:59.799774    7088 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 12:59:59.799893    7088 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 12:59:59.800145    7088 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 12:59:59.800427    7088 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0407 12:59:59.800427    7088 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 12:59:59.803968    7088 out.go:235]   - Generating certificates and keys ...
	I0407 12:59:59.804310    7088 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 12:59:59.804310    7088 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 12:59:59.804310    7088 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0407 12:59:59.804310    7088 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0407 12:59:59.804310    7088 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0407 12:59:59.804954    7088 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0407 12:59:59.805066    7088 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0407 12:59:59.805066    7088 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [ha-573100 localhost] and IPs [172.17.95.223 127.0.0.1 ::1]
	I0407 12:59:59.805066    7088 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0407 12:59:59.805710    7088 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [ha-573100 localhost] and IPs [172.17.95.223 127.0.0.1 ::1]
	I0407 12:59:59.805806    7088 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0407 12:59:59.806060    7088 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0407 12:59:59.806129    7088 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0407 12:59:59.806292    7088 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 12:59:59.806452    7088 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 12:59:59.806589    7088 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0407 12:59:59.806767    7088 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 12:59:59.806767    7088 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 12:59:59.806767    7088 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 12:59:59.807296    7088 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 12:59:59.807467    7088 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 12:59:59.811495    7088 out.go:235]   - Booting up control plane ...
	I0407 12:59:59.811541    7088 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 12:59:59.811541    7088 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 12:59:59.811541    7088 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 12:59:59.812308    7088 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 12:59:59.812520    7088 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 12:59:59.812520    7088 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 12:59:59.812880    7088 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0407 12:59:59.812880    7088 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0407 12:59:59.812880    7088 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 501.479913ms
	I0407 12:59:59.813448    7088 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0407 12:59:59.813639    7088 kubeadm.go:310] [api-check] The API server is healthy after 8.001794118s
	I0407 12:59:59.813639    7088 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0407 12:59:59.813639    7088 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0407 12:59:59.813639    7088 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0407 12:59:59.813639    7088 kubeadm.go:310] [mark-control-plane] Marking the node ha-573100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0407 12:59:59.813639    7088 kubeadm.go:310] [bootstrap-token] Using token: szigwj.nfxg52i168tpi7cc
	I0407 12:59:59.821683    7088 out.go:235]   - Configuring RBAC rules ...
	I0407 12:59:59.821683    7088 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0407 12:59:59.821683    7088 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0407 12:59:59.821683    7088 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0407 12:59:59.822620    7088 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0407 12:59:59.822620    7088 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0407 12:59:59.822620    7088 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0407 12:59:59.822620    7088 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0407 12:59:59.822620    7088 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0407 12:59:59.822620    7088 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0407 12:59:59.823575    7088 kubeadm.go:310] 
	I0407 12:59:59.823575    7088 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0407 12:59:59.823575    7088 kubeadm.go:310] 
	I0407 12:59:59.823575    7088 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0407 12:59:59.823575    7088 kubeadm.go:310] 
	I0407 12:59:59.823575    7088 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0407 12:59:59.823575    7088 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0407 12:59:59.823575    7088 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0407 12:59:59.823575    7088 kubeadm.go:310] 
	I0407 12:59:59.823575    7088 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0407 12:59:59.823575    7088 kubeadm.go:310] 
	I0407 12:59:59.824588    7088 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0407 12:59:59.824588    7088 kubeadm.go:310] 
	I0407 12:59:59.824588    7088 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0407 12:59:59.824588    7088 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0407 12:59:59.824588    7088 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0407 12:59:59.824588    7088 kubeadm.go:310] 
	I0407 12:59:59.824588    7088 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0407 12:59:59.824588    7088 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0407 12:59:59.824588    7088 kubeadm.go:310] 
	I0407 12:59:59.825618    7088 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token szigwj.nfxg52i168tpi7cc \
	I0407 12:59:59.825618    7088 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e47514a965ab97a4e144f4fd6f094d10c7d6accee7eb763a802bd4fd6fa9e479 \
	I0407 12:59:59.825618    7088 kubeadm.go:310] 	--control-plane 
	I0407 12:59:59.825618    7088 kubeadm.go:310] 
	I0407 12:59:59.825618    7088 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0407 12:59:59.825618    7088 kubeadm.go:310] 
	I0407 12:59:59.825618    7088 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token szigwj.nfxg52i168tpi7cc \
	I0407 12:59:59.825618    7088 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e47514a965ab97a4e144f4fd6f094d10c7d6accee7eb763a802bd4fd6fa9e479 
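The `--discovery-token-ca-cert-hash sha256:e47514…` value printed by kubeadm above is the SHA-256 digest of the cluster CA's DER-encoded public key (its SubjectPublicKeyInfo); joining nodes use it to pin the CA they discover via the bootstrap token. It can be recomputed from the CA certificate as in this sketch (the file path is an assumption; inside the VM the CA lives at /var/lib/minikube/certs/ca.crt):

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	data, err := os.ReadFile("ca.crt") // assumption: path to the cluster CA certificate
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		panic("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA public key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}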
	I0407 12:59:59.825618    7088 cni.go:84] Creating CNI manager for ""
	I0407 12:59:59.826618    7088 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0407 12:59:59.829042    7088 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0407 12:59:59.844735    7088 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0407 12:59:59.852820    7088 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0407 12:59:59.852820    7088 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0407 12:59:59.897638    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0407 13:00:00.580762    7088 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0407 13:00:00.594427    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:00:00.595428    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-573100 minikube.k8s.io/updated_at=2025_04_07T13_00_00_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277 minikube.k8s.io/name=ha-573100 minikube.k8s.io/primary=true
	I0407 13:00:00.622267    7088 ops.go:34] apiserver oom_adj: -16
	I0407 13:00:00.812514    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:00:01.312289    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:00:01.813767    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:00:02.310642    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:00:02.812313    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:00:03.313080    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:00:03.813190    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 13:00:03.953702    7088 kubeadm.go:1113] duration metric: took 3.3727954s to wait for elevateKubeSystemPrivileges
	I0407 13:00:03.953702    7088 kubeadm.go:394] duration metric: took 18.9689181s to StartCluster
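The repeated `kubectl get sa default` calls between 13:00:00 and 13:00:03 are a poll loop: the minikube-rbac clusterrolebinding created just before only becomes usable once kube-controller-manager has created the "default" ServiceAccount, so the lookup is retried roughly every half second until it succeeds, which is the 3.37s reported for elevateKubeSystemPrivileges. A minimal version of that retry pattern, using the kubectl CLI directly rather than minikube's ssh_runner (kubeconfig path and timeout are assumptions):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount polls `kubectl get sa default` until it succeeds
// or the deadline passes.
func waitForDefaultServiceAccount(kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command("kubectl", "--kubeconfig", kubeconfig, "get", "sa", "default")
		if err := cmd.Run(); err == nil {
			return nil // ServiceAccount exists; the RBAC binding is now effective
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default ServiceAccount not ready after %s", timeout)
}

func main() {
	if err := waitForDefaultServiceAccount("/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("default ServiceAccount is present")
}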
	I0407 13:00:03.953702    7088 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:00:03.953702    7088 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 13:00:03.955898    7088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:00:03.957157    7088 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:172.17.95.223 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 13:00:03.957258    7088 start.go:241] waiting for startup goroutines ...
	I0407 13:00:03.957360    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0407 13:00:03.957258    7088 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0407 13:00:03.957584    7088 addons.go:69] Setting default-storageclass=true in profile "ha-573100"
	I0407 13:00:03.957584    7088 addons.go:69] Setting storage-provisioner=true in profile "ha-573100"
	I0407 13:00:03.957703    7088 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-573100"
	I0407 13:00:03.957703    7088 addons.go:238] Setting addon storage-provisioner=true in "ha-573100"
	I0407 13:00:03.957703    7088 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:00:03.957853    7088 host.go:66] Checking if "ha-573100" exists ...
	I0407 13:00:03.958173    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 13:00:03.959195    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 13:00:04.144719    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0407 13:00:04.639000    7088 start.go:971] {"host.minikube.internal": 172.17.80.1} host record injected into CoreDNS's ConfigMap
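The long sed pipeline at 13:00:04.144719 edits the CoreDNS Corefile in place: it inserts a hosts block mapping 172.17.80.1 to host.minikube.internal (with fallthrough) ahead of the `forward . /etc/resolv.conf` plugin, adds the `log` plugin before `errors`, and feeds the result to `kubectl replace`. The same transformation expressed without sed is sketched below; the input string is a typical default Corefile, not the exact ConfigMap contents from this cluster.

package main

import (
	"fmt"
	"strings"
)

func main() {
	corefile := `.:53 {
        errors
        health
        kubernetes cluster.local in-addr.arpa ip6.arpa {
           pods insecure
           fallthrough in-addr.arpa ip6.arpa
        }
        forward . /etc/resolv.conf
        cache 30
    }`

	hostsBlock := `        hosts {
           172.17.80.1 host.minikube.internal
           fallthrough
        }
`
	var out strings.Builder
	for _, line := range strings.SplitAfter(corefile, "\n") {
		trimmed := strings.TrimSpace(line)
		if strings.HasPrefix(trimmed, "forward . /etc/resolv.conf") {
			out.WriteString(hostsBlock) // same insertion point the sed expression targets
		}
		if trimmed == "errors" {
			out.WriteString("        log\n") // the second sed expression enables query logging
		}
		out.WriteString(line)
	}
	fmt.Println(out.String())
}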
	I0407 13:00:06.312339    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:00:06.312555    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:06.315447    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:00:06.315531    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:06.315960    7088 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:00:06.316424    7088 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 13:00:06.317476    7088 kapi.go:59] client config for ha-573100: &rest.Config{Host:"https://172.17.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-573100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-573100\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), Nex
tProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2be92e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
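The rest.Config dump above shows the client minikube builds from the freshly written kubeconfig: it targets the HA virtual IP https://172.17.95.254:8443 and authenticates with the profile client.crt/client.key generated earlier. Building an equivalent client with client-go looks roughly like the sketch below (the kubeconfig path is the one from this run; adjust as needed).

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Same kubeconfig file this test run keeps updating.
	kubeconfig := `C:\Users\jenkins.minikube3\minikube-integration\kubeconfig`

	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		panic(err)
	}
	// cfg.Host is the HA endpoint (https://172.17.95.254:8443) and the TLS
	// client cert/key files are the profile certs seen in the dump above.
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	version, err := clientset.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Println("connected to", cfg.Host, "running", version.GitVersion)
}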
	I0407 13:00:06.319008    7088 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:00:06.319008    7088 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0407 13:00:06.319072    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 13:00:06.319072    7088 cert_rotation.go:140] Starting client certificate rotation controller
	I0407 13:00:06.319072    7088 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0407 13:00:06.319072    7088 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0407 13:00:06.319072    7088 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0407 13:00:06.319072    7088 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0407 13:00:06.319859    7088 addons.go:238] Setting addon default-storageclass=true in "ha-573100"
	I0407 13:00:06.319967    7088 host.go:66] Checking if "ha-573100" exists ...
	I0407 13:00:06.321125    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 13:00:08.710776    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:00:08.710776    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:08.710964    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 13:00:08.782925    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:00:08.783139    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:08.783206    7088 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0407 13:00:08.783278    7088 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0407 13:00:08.783381    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 13:00:11.067757    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:00:11.067757    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:11.067897    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 13:00:11.447864    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 13:00:11.447864    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:11.447864    7088 sshutil.go:53] new ssh client: &{IP:172.17.95.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\id_rsa Username:docker}
	I0407 13:00:11.618298    7088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 13:00:13.688980    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 13:00:13.689872    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:13.690059    7088 sshutil.go:53] new ssh client: &{IP:172.17.95.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\id_rsa Username:docker}
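The sshutil lines show minikube opening SSH sessions to 172.17.95.223:22 as user "docker" with the per-machine id_rsa key, then running the kubectl apply commands over those sessions. An equivalent connection with golang.org/x/crypto/ssh is sketched below; host-key checking is skipped, which is only acceptable for a throwaway test VM like this one.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path as logged by sshutil.go (assumption: readable from this process).
	keyBytes, err := os.ReadFile(`C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\id_rsa`)
	if err != nil {
		panic(err)
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		panic(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // test VM only; verify host keys in real deployments
	}
	client, err := ssh.Dial("tcp", "172.17.95.223:22", cfg)
	if err != nil {
		panic(err)
	}
	defer client.Close()

	session, err := client.NewSession()
	if err != nil {
		panic(err)
	}
	defer session.Close()

	out, err := session.CombinedOutput("sudo ls /etc/kubernetes/addons")
	if err != nil {
		fmt.Println("remote command failed:", err)
	}
	fmt.Print(string(out))
}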
	I0407 13:00:13.828462    7088 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0407 13:00:13.970527    7088 round_trippers.go:470] GET https://172.17.95.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0407 13:00:13.971499    7088 round_trippers.go:476] Request Headers:
	I0407 13:00:13.971499    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:00:13.971499    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:00:13.984644    7088 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0407 13:00:13.985632    7088 round_trippers.go:470] PUT https://172.17.95.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0407 13:00:13.985632    7088 round_trippers.go:476] Request Headers:
	I0407 13:00:13.985632    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:00:13.985632    7088 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 13:00:13.985632    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:00:13.989947    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:00:13.992973    7088 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0407 13:00:13.999319    7088 addons.go:514] duration metric: took 10.04202s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0407 13:00:13.999420    7088 start.go:246] waiting for cluster config update ...
	I0407 13:00:13.999488    7088 start.go:255] writing updated cluster config ...
	I0407 13:00:14.003560    7088 out.go:201] 
	I0407 13:00:14.018164    7088 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:00:14.018164    7088 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\config.json ...
	I0407 13:00:14.025188    7088 out.go:177] * Starting "ha-573100-m02" control-plane node in "ha-573100" cluster
	I0407 13:00:14.029200    7088 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 13:00:14.029200    7088 cache.go:56] Caching tarball of preloaded images
	I0407 13:00:14.029200    7088 preload.go:172] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0407 13:00:14.029200    7088 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0407 13:00:14.030187    7088 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\config.json ...
	I0407 13:00:14.035191    7088 start.go:360] acquireMachinesLock for ha-573100-m02: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 13:00:14.035191    7088 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-573100-m02"
	I0407 13:00:14.035191    7088 start.go:93] Provisioning new machine with config: &{Name:ha-573100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-573100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.95.223 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 13:00:14.035191    7088 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0407 13:00:14.039191    7088 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0407 13:00:14.039191    7088 start.go:159] libmachine.API.Create for "ha-573100" (driver="hyperv")
	I0407 13:00:14.039191    7088 client.go:168] LocalClient.Create starting
	I0407 13:00:14.040199    7088 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0407 13:00:14.040199    7088 main.go:141] libmachine: Decoding PEM data...
	I0407 13:00:14.040199    7088 main.go:141] libmachine: Parsing certificate...
	I0407 13:00:14.040199    7088 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0407 13:00:14.041188    7088 main.go:141] libmachine: Decoding PEM data...
	I0407 13:00:14.041188    7088 main.go:141] libmachine: Parsing certificate...
	I0407 13:00:14.041188    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0407 13:00:15.940822    7088 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0407 13:00:15.940822    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:15.940822    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0407 13:00:17.658612    7088 main.go:141] libmachine: [stdout =====>] : False
	
	I0407 13:00:17.659074    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:17.659074    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0407 13:00:19.113110    7088 main.go:141] libmachine: [stdout =====>] : True
	
	I0407 13:00:19.113358    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:19.113358    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0407 13:00:22.642467    7088 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0407 13:00:22.643579    7088 main.go:141] libmachine: [stderr =====>] : 
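
The switch lookup above shells out to powershell.exe with -NoProfile -NonInteractive and decodes the ConvertTo-Json output to pick a usable VM switch. A minimal Go sketch of that invoke-and-parse pattern (the helper name and the trimmed-down pipeline are assumptions for illustration, not minikube's actual code):

    package main

    import (
    	"encoding/json"
    	"fmt"
    	"os/exec"
    )

    // vmSwitch mirrors the fields selected by the ConvertTo-Json pipeline in the log.
    type vmSwitch struct {
    	Id         string
    	Name       string
    	SwitchType int
    }

    // listSwitches runs a PowerShell pipeline and decodes its JSON output.
    // The pipeline here is a simplified stand-in for the logged command.
    func listSwitches() ([]vmSwitch, error) {
    	cmd := exec.Command(`C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
    		"-NoProfile", "-NonInteractive",
    		`[Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType)`)
    	out, err := cmd.Output()
    	if err != nil {
    		return nil, fmt.Errorf("powershell: %w", err)
    	}
    	var switches []vmSwitch
    	if err := json.Unmarshal(out, &switches); err != nil {
    		return nil, fmt.Errorf("decode switch list: %w", err)
    	}
    	return switches, nil
    }

    func main() {
    	switches, err := listSwitches()
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	for _, s := range switches {
    		fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
    	}
    }

Forcing UTF-8 console output before ConvertTo-Json, as the logged command does, keeps the JSON decodable regardless of the host code page.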
	I0407 13:00:22.646188    7088 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0407 13:00:23.161325    7088 main.go:141] libmachine: Creating SSH key...
	I0407 13:00:23.243457    7088 main.go:141] libmachine: Creating VM...
	I0407 13:00:23.243457    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0407 13:00:26.189500    7088 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0407 13:00:26.189500    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:26.189500    7088 main.go:141] libmachine: Using switch "Default Switch"
	I0407 13:00:26.189500    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0407 13:00:27.974558    7088 main.go:141] libmachine: [stdout =====>] : True
	
	I0407 13:00:27.974558    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:27.975403    7088 main.go:141] libmachine: Creating VHD
	I0407 13:00:27.975403    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0407 13:00:31.811567    7088 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m02\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 3353EDDA-1498-4F5F-A6FB-869591EAB766
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0407 13:00:31.812463    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:31.812463    7088 main.go:141] libmachine: Writing magic tar header
	I0407 13:00:31.812463    7088 main.go:141] libmachine: Writing SSH key tar header
	I0407 13:00:31.829007    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0407 13:00:35.006191    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:00:35.006191    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:35.007169    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m02\disk.vhd' -SizeBytes 20000MB
	I0407 13:00:37.530661    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:00:37.530661    7088 main.go:141] libmachine: [stderr =====>] : 
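
The disk preparation above follows a three-step pattern: create a small fixed-size VHD, write a tar stream holding the freshly generated SSH key into the raw disk area (the "Writing magic tar header" / "Writing SSH key tar header" lines), then convert the file to a dynamic VHD and resize it to the requested 20000MB. A rough Go sketch of the tar-embedding step, assuming the tar data goes at the start of the fixed VHD's data area and using placeholder paths:

    package main

    import (
    	"archive/tar"
    	"fmt"
    	"os"
    )

    // embedSSHKey writes a small tar stream containing the public key at the
    // start of the raw disk file. The exact on-disk layout is an assumption
    // for illustration; only the overall tar-into-VHD idea comes from the log.
    func embedSSHKey(vhdPath, pubKeyPath string) error {
    	pubKey, err := os.ReadFile(pubKeyPath)
    	if err != nil {
    		return err
    	}
    	f, err := os.OpenFile(vhdPath, os.O_WRONLY, 0)
    	if err != nil {
    		return err
    	}
    	defer f.Close()

    	tw := tar.NewWriter(f) // tar data lands at the beginning of the disk
    	hdr := &tar.Header{Name: ".ssh/authorized_keys", Mode: 0644, Size: int64(len(pubKey))}
    	if err := tw.WriteHeader(hdr); err != nil {
    		return err
    	}
    	if _, err := tw.Write(pubKey); err != nil {
    		return err
    	}
    	return tw.Close()
    }

    func main() {
    	// Placeholder paths, hypothetical for this sketch.
    	if err := embedSSHKey(`C:\path\to\fixed.vhd`, `C:\path\to\id_rsa.pub`); err != nil {
    		fmt.Println("error:", err)
    	}
    }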
	I0407 13:00:37.530661    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-573100-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0407 13:00:41.078585    7088 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-573100-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0407 13:00:41.078585    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:41.079284    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-573100-m02 -DynamicMemoryEnabled $false
	I0407 13:00:43.307325    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:00:43.307753    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:43.307753    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-573100-m02 -Count 2
	I0407 13:00:45.458801    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:00:45.458801    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:45.459336    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-573100-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m02\boot2docker.iso'
	I0407 13:00:48.010252    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:00:48.010252    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:48.010669    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-573100-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m02\disk.vhd'
	I0407 13:00:50.701895    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:00:50.702086    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:50.702086    7088 main.go:141] libmachine: Starting VM...
	I0407 13:00:50.702086    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-573100-m02
	I0407 13:00:53.719116    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:00:53.719116    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:53.719116    7088 main.go:141] libmachine: Waiting for host to start...
	I0407 13:00:53.720116    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:00:56.023235    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:00:56.023440    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:56.023584    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:00:58.651694    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:00:58.651694    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:00:59.652693    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:01:02.080278    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:01:02.080278    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:02.080278    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:01:04.706413    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:01:04.706413    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:05.706546    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:01:07.863083    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:01:07.863083    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:07.864085    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:01:10.365875    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:01:10.365875    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:11.366046    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:01:13.542100    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:01:13.542334    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:13.542334    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:01:16.097723    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:01:16.097723    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:17.098182    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:01:19.341818    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:01:19.342094    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:19.342094    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:01:21.900537    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:01:21.900537    7088 main.go:141] libmachine: [stderr =====>] : 
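
The alternating Get-VM state and ipaddresses[0] calls above are a wait loop: the adapter reports no address until the guest's network stack is up, so the driver keeps re-querying until a non-empty IP (172.17.82.162 here) comes back. A simplified Go sketch of that wait, with hypothetical helper names and an assumed retry interval:

    package main

    import (
    	"fmt"
    	"os/exec"
    	"strings"
    	"time"
    )

    // queryVMIP asks Hyper-V for the first IP address of the VM's first
    // adapter, mirroring the PowerShell one-liner in the log.
    func queryVMIP(vmName string) (string, error) {
    	ps := fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vmName)
    	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", ps).Output()
    	if err != nil {
    		return "", err
    	}
    	return strings.TrimSpace(string(out)), nil
    }

    // waitForIP polls until the guest reports an address or the deadline passes.
    func waitForIP(vmName string, timeout time.Duration) (string, error) {
    	deadline := time.Now().Add(timeout)
    	for time.Now().Before(deadline) {
    		ip, err := queryVMIP(vmName)
    		if err == nil && ip != "" {
    			return ip, nil
    		}
    		time.Sleep(time.Second) // pause between retries; the interval is an assumption
    	}
    	return "", fmt.Errorf("timed out waiting for %s to report an IP", vmName)
    }

    func main() {
    	ip, err := waitForIP("ha-573100-m02", 5*time.Minute)
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println("VM IP:", ip)
    }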
	I0407 13:01:21.901200    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:01:24.007251    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:01:24.007510    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:24.007617    7088 machine.go:93] provisionDockerMachine start ...
	I0407 13:01:24.007928    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:01:26.164984    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:01:26.164984    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:26.164984    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:01:28.672921    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:01:28.672921    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:28.678826    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:01:28.680191    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.82.162 22 <nil> <nil>}
	I0407 13:01:28.680191    7088 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 13:01:28.810399    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0407 13:01:28.810459    7088 buildroot.go:166] provisioning hostname "ha-573100-m02"
	I0407 13:01:28.810519    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:01:30.928993    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:01:30.929305    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:30.929305    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:01:33.512685    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:01:33.512685    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:33.518374    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:01:33.519096    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.82.162 22 <nil> <nil>}
	I0407 13:01:33.519096    7088 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-573100-m02 && echo "ha-573100-m02" | sudo tee /etc/hostname
	I0407 13:01:33.668344    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-573100-m02
	
	I0407 13:01:33.668406    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:01:35.778565    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:01:35.778565    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:35.778641    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:01:38.274066    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:01:38.274066    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:38.280678    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:01:38.280778    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.82.162 22 <nil> <nil>}
	I0407 13:01:38.280778    7088 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-573100-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-573100-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-573100-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 13:01:38.429919    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:01:38.429919    7088 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0407 13:01:38.429919    7088 buildroot.go:174] setting up certificates
	I0407 13:01:38.429919    7088 provision.go:84] configureAuth start
	I0407 13:01:38.429919    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:01:40.523263    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:01:40.523534    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:40.523534    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:01:42.986312    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:01:42.986312    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:42.987045    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:01:45.088779    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:01:45.089311    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:45.089408    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:01:47.563218    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:01:47.563218    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:47.563218    7088 provision.go:143] copyHostCerts
	I0407 13:01:47.563218    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0407 13:01:47.563218    7088 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0407 13:01:47.563218    7088 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0407 13:01:47.563218    7088 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0407 13:01:47.563218    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0407 13:01:47.563218    7088 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0407 13:01:47.563218    7088 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0407 13:01:47.563218    7088 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0407 13:01:47.563218    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0407 13:01:47.563218    7088 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0407 13:01:47.563218    7088 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0407 13:01:47.563218    7088 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0407 13:01:47.568741    7088 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-573100-m02 san=[127.0.0.1 172.17.82.162 ha-573100-m02 localhost minikube]
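
The server certificate generated above is signed by the local minikube CA and carries the node's addresses and names (127.0.0.1, 172.17.82.162, ha-573100-m02, localhost, minikube) as subject alternative names, so the Docker TLS endpoint is valid for every way the host will reach it. A condensed Go sketch of that SAN handling with crypto/x509; the throwaway in-memory CA below stands in for ca.pem/ca-key.pem purely so the example runs:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"fmt"
    	"math/big"
    	"net"
    	"time"
    )

    // newServerCert signs a server certificate with the given CA, putting the
    // node IPs and hostnames into the SAN fields, as in the provision step above.
    func newServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey, ips []net.IP, dnsNames []string) ([]byte, *rsa.PrivateKey, error) {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		return nil, nil, err
    	}
    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(time.Now().UnixNano()),
    		Subject:      pkix.Name{Organization: []string{"jenkins.ha-573100-m02"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		IPAddresses:  ips,
    		DNSNames:     dnsNames,
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
    	if err != nil {
    		return nil, nil, err
    	}
    	return der, key, nil
    }

    func main() {
    	// A throwaway self-signed CA; real code would load ca.pem/ca-key.pem.
    	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
    	caTmpl := &x509.Certificate{
    		SerialNumber:          big.NewInt(1),
    		Subject:               pkix.Name{CommonName: "minikubeCA"},
    		NotBefore:             time.Now(),
    		NotAfter:              time.Now().Add(26280 * time.Hour),
    		IsCA:                  true,
    		KeyUsage:              x509.KeyUsageCertSign,
    		BasicConstraintsValid: true,
    	}
    	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
    	ca, _ := x509.ParseCertificate(caDER)

    	der, _, err := newServerCert(ca, caKey,
    		[]net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.82.162")},
    		[]string{"ha-573100-m02", "localhost", "minikube"})
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println("server cert DER bytes:", len(der))
    }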
	I0407 13:01:47.850562    7088 provision.go:177] copyRemoteCerts
	I0407 13:01:47.859512    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 13:01:47.860539    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:01:49.984246    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:01:49.984246    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:49.984246    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:01:52.492834    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:01:52.492834    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:52.493517    7088 sshutil.go:53] new ssh client: &{IP:172.17.82.162 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m02\id_rsa Username:docker}
	I0407 13:01:52.599357    7088 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7398247s)
	I0407 13:01:52.599357    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0407 13:01:52.600433    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 13:01:52.658346    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0407 13:01:52.658888    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0407 13:01:52.704286    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0407 13:01:52.704836    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0407 13:01:52.746925    7088 provision.go:87] duration metric: took 14.3169455s to configureAuth
	I0407 13:01:52.746925    7088 buildroot.go:189] setting minikube options for container-runtime
	I0407 13:01:52.747846    7088 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:01:52.747896    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:01:54.827454    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:01:54.827668    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:54.827668    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:01:57.360113    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:01:57.360113    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:57.366106    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:01:57.367083    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.82.162 22 <nil> <nil>}
	I0407 13:01:57.367233    7088 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0407 13:01:57.498793    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0407 13:01:57.498866    7088 buildroot.go:70] root file system type: tmpfs
	I0407 13:01:57.499078    7088 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0407 13:01:57.499154    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:01:59.590944    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:01:59.590944    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:01:59.591792    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:02:02.107965    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:02:02.107965    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:02.113686    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:02:02.114412    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.82.162 22 <nil> <nil>}
	I0407 13:02:02.114412    7088 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.95.223"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0407 13:02:02.259759    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.95.223
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0407 13:02:02.260479    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:02:04.331421    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:02:04.331421    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:04.331525    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:02:06.821254    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:02:06.821254    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:06.829042    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:02:06.829783    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.82.162 22 <nil> <nil>}
	I0407 13:02:06.829783    7088 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0407 13:02:08.981285    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0407 13:02:08.981285    7088 machine.go:96] duration metric: took 44.9733808s to provisionDockerMachine
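
The single SSH command a few lines above is the idempotent unit install: the rendered unit is written to docker.service.new, compared against the installed unit with diff, and only swapped in (followed by daemon-reload, enable, and restart) when they differ; on this fresh node diff fails because docker.service does not exist yet, so the swap and restart always run. A tiny Go sketch that rebuilds that command string:

    package main

    import "fmt"

    // installUnitCmd builds the diff-or-swap shell command seen in the log so
    // the docker unit is only replaced, and the daemon restarted, when it changed.
    func installUnitCmd(unit string) string {
    	return fmt.Sprintf(
    		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }",
    		unit)
    }

    func main() {
    	fmt.Println(installUnitCmd("/lib/systemd/system/docker.service"))
    }

Doing the comparison on the guest keeps repeated provisioning runs from restarting Docker when the unit has not actually changed.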
	I0407 13:02:08.981285    7088 client.go:171] duration metric: took 1m54.9416106s to LocalClient.Create
	I0407 13:02:08.981285    7088 start.go:167] duration metric: took 1m54.9416106s to libmachine.API.Create "ha-573100"
	I0407 13:02:08.981285    7088 start.go:293] postStartSetup for "ha-573100-m02" (driver="hyperv")
	I0407 13:02:08.981285    7088 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 13:02:08.995671    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 13:02:08.995671    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:02:11.078501    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:02:11.078773    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:11.078773    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:02:13.528362    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:02:13.528362    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:13.528362    7088 sshutil.go:53] new ssh client: &{IP:172.17.82.162 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m02\id_rsa Username:docker}
	I0407 13:02:13.632499    7088 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.6368075s)
	I0407 13:02:13.643947    7088 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 13:02:13.650705    7088 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 13:02:13.650705    7088 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0407 13:02:13.650705    7088 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0407 13:02:13.652373    7088 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> 77282.pem in /etc/ssl/certs
	I0407 13:02:13.652414    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> /etc/ssl/certs/77282.pem
	I0407 13:02:13.663164    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 13:02:13.681766    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem --> /etc/ssl/certs/77282.pem (1708 bytes)
	I0407 13:02:13.732675    7088 start.go:296] duration metric: took 4.7513692s for postStartSetup
	I0407 13:02:13.735399    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:02:15.845304    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:02:15.845304    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:15.845304    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:02:18.335785    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:02:18.335785    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:18.336508    7088 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\config.json ...
	I0407 13:02:18.338927    7088 start.go:128] duration metric: took 2m4.3032123s to createHost
	I0407 13:02:18.339050    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:02:20.429546    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:02:20.430373    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:20.430373    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:02:22.928925    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:02:22.928999    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:22.934759    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:02:22.935274    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.82.162 22 <nil> <nil>}
	I0407 13:02:22.935274    7088 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 13:02:23.060239    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744030943.072476964
	
	I0407 13:02:23.060239    7088 fix.go:216] guest clock: 1744030943.072476964
	I0407 13:02:23.060320    7088 fix.go:229] Guest: 2025-04-07 13:02:23.072476964 +0000 UTC Remote: 2025-04-07 13:02:18.3389272 +0000 UTC m=+321.779734301 (delta=4.733549764s)
	I0407 13:02:23.060365    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:02:25.137366    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:02:25.137366    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:25.138358    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:02:27.682992    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:02:27.683821    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:27.689762    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:02:27.690468    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.82.162 22 <nil> <nil>}
	I0407 13:02:27.690468    7088 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744030943
	I0407 13:02:27.832219    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  7 13:02:23 UTC 2025
	
	I0407 13:02:27.832280    7088 fix.go:236] clock set: Mon Apr  7 13:02:23 UTC 2025
	 (err=<nil>)
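
The clock fix just above reads the guest clock over SSH with date +%s.%N, compares it to the host-side reference timestamp, and because the 4.73s delta is outside tolerance, resets the guest with sudo date -s @<unix-seconds>. A small Go sketch of that check; the tolerance value and the choice of reference time are assumptions for illustration:

    package main

    import (
    	"fmt"
    	"math"
    	"strconv"
    	"time"
    )

    // clockFixCmd decides whether the guest clock needs to be reset and, if so,
    // returns the command to run over SSH. guestOut is the raw "date +%s.%N"
    // output from the guest; tolerance is an assumed threshold for this sketch.
    func clockFixCmd(guestOut string, hostRef time.Time, tolerance time.Duration) (string, bool, error) {
    	secs, err := strconv.ParseFloat(guestOut, 64)
    	if err != nil {
    		return "", false, fmt.Errorf("parse guest clock %q: %w", guestOut, err)
    	}
    	guest := time.Unix(0, int64(secs*float64(time.Second)))
    	delta := guest.Sub(hostRef)
    	if math.Abs(delta.Seconds()) < tolerance.Seconds() {
    		return "", false, nil // close enough, leave the guest clock alone
    	}
    	return fmt.Sprintf("sudo date -s @%d", hostRef.Unix()), true, nil
    }

    func main() {
    	// Guest value taken from the log; the reference time here is just "now".
    	cmd, needed, err := clockFixCmd("1744030943.072476964", time.Now(), 2*time.Second)
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println(needed, cmd)
    }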
	I0407 13:02:27.832280    7088 start.go:83] releasing machines lock for "ha-573100-m02", held for 2m13.7965248s
	I0407 13:02:27.832576    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:02:29.931800    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:02:29.931800    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:29.932004    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:02:32.456589    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:02:32.456589    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:32.460486    7088 out.go:177] * Found network options:
	I0407 13:02:32.463067    7088 out.go:177]   - NO_PROXY=172.17.95.223
	W0407 13:02:32.466137    7088 proxy.go:119] fail to check proxy env: Error ip not in block
	I0407 13:02:32.468572    7088 out.go:177]   - NO_PROXY=172.17.95.223
	W0407 13:02:32.471057    7088 proxy.go:119] fail to check proxy env: Error ip not in block
	W0407 13:02:32.472088    7088 proxy.go:119] fail to check proxy env: Error ip not in block
	I0407 13:02:32.473615    7088 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0407 13:02:32.474660    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:02:32.482717    7088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0407 13:02:32.482717    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m02 ).state
	I0407 13:02:34.722583    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:02:34.722583    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:34.722583    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:02:34.722583    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:34.722832    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:02:34.722892    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 13:02:37.393218    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:02:37.394071    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:37.394284    7088 sshutil.go:53] new ssh client: &{IP:172.17.82.162 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m02\id_rsa Username:docker}
	I0407 13:02:37.421438    7088 main.go:141] libmachine: [stdout =====>] : 172.17.82.162
	
	I0407 13:02:37.421438    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:37.421670    7088 sshutil.go:53] new ssh client: &{IP:172.17.82.162 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m02\id_rsa Username:docker}
	I0407 13:02:37.494550    7088 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0118105s)
	W0407 13:02:37.494631    7088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 13:02:37.505544    7088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 13:02:37.506448    7088 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.0328109s)
	W0407 13:02:37.506448    7088 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0407 13:02:37.535745    7088 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 13:02:37.535745    7088 start.go:495] detecting cgroup driver to use...
	I0407 13:02:37.535745    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:02:37.581687    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0407 13:02:37.613493    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0407 13:02:37.632362    7088 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	W0407 13:02:37.643821    7088 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0407 13:02:37.643821    7088 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0407 13:02:37.645834    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0407 13:02:37.673979    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 13:02:37.704730    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0407 13:02:37.735391    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 13:02:37.765119    7088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 13:02:37.801669    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0407 13:02:37.834523    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0407 13:02:37.865589    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0407 13:02:37.896999    7088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 13:02:37.915906    7088 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 13:02:37.927317    7088 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 13:02:37.957524    7088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
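
The three commands above prepare bridge netfilter for the CNI: sysctl net.bridge.bridge-nf-call-iptables fails because the key does not exist until the br_netfilter module is loaded, so modprobe br_netfilter is attempted, and IPv4 forwarding is switched on by writing 1 to /proc/sys/net/ipv4/ip_forward. A short Go sketch of that sequence against a local stand-in runner (in minikube these commands run over SSH):

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // run executes a shell command locally; it stands in for the SSH runner
    // used in the log, purely so the sketch is self-contained.
    func run(cmd string) error {
    	out, err := exec.Command("sh", "-c", cmd).CombinedOutput()
    	if err != nil {
    		return fmt.Errorf("%s: %w (%s)", cmd, err, out)
    	}
    	return nil
    }

    // enableBridgeNetfilter mirrors the sysctl / modprobe / ip_forward steps:
    // tolerate a missing sysctl key, try to load br_netfilter, then make sure
    // packet forwarding is on.
    func enableBridgeNetfilter() error {
    	if err := run("sudo sysctl net.bridge.bridge-nf-call-iptables"); err != nil {
    		// The key is absent until br_netfilter is loaded; that is expected.
    		if err := run("sudo modprobe br_netfilter"); err != nil {
    			fmt.Println("br_netfilter not loaded:", err)
    		}
    	}
    	return run(`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`)
    }

    func main() {
    	if err := enableBridgeNetfilter(); err != nil {
    		fmt.Println("error:", err)
    	}
    }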
	I0407 13:02:37.983388    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:02:38.185390    7088 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0407 13:02:38.214407    7088 start.go:495] detecting cgroup driver to use...
	I0407 13:02:38.228213    7088 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0407 13:02:38.264017    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 13:02:38.297812    7088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 13:02:38.341407    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 13:02:38.377900    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 13:02:38.409708    7088 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0407 13:02:38.474246    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 13:02:38.502511    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:02:38.548886    7088 ssh_runner.go:195] Run: which cri-dockerd
	I0407 13:02:38.565281    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0407 13:02:38.581511    7088 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0407 13:02:38.618480    7088 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0407 13:02:38.800982    7088 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0407 13:02:38.977656    7088 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0407 13:02:38.977656    7088 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0407 13:02:39.026140    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:02:39.219102    7088 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0407 13:02:41.781927    7088 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.562738s)
	I0407 13:02:41.794022    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0407 13:02:41.835006    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 13:02:41.868958    7088 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0407 13:02:42.061443    7088 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0407 13:02:42.261178    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:02:42.474095    7088 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0407 13:02:42.512513    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 13:02:42.548716    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:02:42.730861    7088 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0407 13:02:42.831136    7088 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0407 13:02:42.842948    7088 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0407 13:02:42.850956    7088 start.go:563] Will wait 60s for crictl version
	I0407 13:02:42.862897    7088 ssh_runner.go:195] Run: which crictl
	I0407 13:02:42.878548    7088 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 13:02:42.927796    7088 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0407 13:02:42.936981    7088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 13:02:42.980964    7088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 13:02:43.017931    7088 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0407 13:02:43.021933    7088 out.go:177]   - env NO_PROXY=172.17.95.223
	I0407 13:02:43.024394    7088 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0407 13:02:43.030312    7088 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0407 13:02:43.030881    7088 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0407 13:02:43.030881    7088 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0407 13:02:43.030881    7088 ip.go:211] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:71:e6:e3 Flags:up|broadcast|multicast|running}
	I0407 13:02:43.035088    7088 ip.go:214] interface addr: fe80::8b22:fcbb:f73:c9e5/64
	I0407 13:02:43.035088    7088 ip.go:214] interface addr: 172.17.80.1/20
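The ip.go lines above walk the host's network interfaces looking for one whose name starts with "vEthernet (Default Switch)" and then record its addresses. A minimal standalone sketch of that lookup, using only the Go standard library, is shown below; it is an illustration, not minikube's actual ip.go code.

package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	const prefix = "vEthernet (Default Switch)" // prefix taken from the log above
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, ifc := range ifaces {
		if !strings.HasPrefix(ifc.Name, prefix) {
			continue
		}
		addrs, err := ifc.Addrs()
		if err != nil {
			panic(err)
		}
		for _, a := range addrs {
			// e.g. fe80::.../64 and 172.17.80.1/20 in this run
			fmt.Printf("interface %q addr: %s\n", ifc.Name, a.String())
		}
	}
}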
	I0407 13:02:43.047708    7088 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0407 13:02:43.053805    7088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:02:43.076747    7088 mustload.go:65] Loading cluster: ha-573100
	I0407 13:02:43.077393    7088 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:02:43.078082    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 13:02:45.190200    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:02:45.190588    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:45.190588    7088 host.go:66] Checking if "ha-573100" exists ...
	I0407 13:02:45.191355    7088 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100 for IP: 172.17.82.162
	I0407 13:02:45.191355    7088 certs.go:194] generating shared ca certs ...
	I0407 13:02:45.191426    7088 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:02:45.192021    7088 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0407 13:02:45.192494    7088 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0407 13:02:45.192706    7088 certs.go:256] generating profile certs ...
	I0407 13:02:45.193446    7088 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\client.key
	I0407 13:02:45.193643    7088 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key.46166831
	I0407 13:02:45.193808    7088 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt.46166831 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.95.223 172.17.82.162 172.17.95.254]
	I0407 13:02:45.371560    7088 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt.46166831 ...
	I0407 13:02:45.371560    7088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt.46166831: {Name:mkc8e38912772193e71c7d2f229115814f2aefe8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:02:45.373468    7088 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key.46166831 ...
	I0407 13:02:45.373468    7088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key.46166831: {Name:mka4627968bd9ab0cbeec7ef9cb63578cf53bbb6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:02:45.374511    7088 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt.46166831 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt
	I0407 13:02:45.390526    7088 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key.46166831 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key
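The crypto.go lines above generate a signed apiserver certificate whose SANs include the control-plane IPs, the service IP, and the HA VIP listed in the log. The sketch below shows the general shape of creating such a certificate with Go's crypto/x509; it self-signs for brevity (minikube signs with its cluster CA), so treat the template fields as illustrative assumptions.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"}, // placeholder subject
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * time.Hour), // placeholder validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// IP SANs copied from the "Generating cert ... with IP's" line above.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("172.17.95.223"), net.ParseIP("172.17.82.162"), net.ParseIP("172.17.95.254"),
		},
	}
	// Self-signed here; minikube would sign with its "minikubeCA" key instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}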
	I0407 13:02:45.391613    7088 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.key
	I0407 13:02:45.391613    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0407 13:02:45.392693    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0407 13:02:45.392693    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0407 13:02:45.392693    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0407 13:02:45.392693    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0407 13:02:45.393354    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0407 13:02:45.393608    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0407 13:02:45.393608    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0407 13:02:45.394599    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem (1338 bytes)
	W0407 13:02:45.394599    7088 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728_empty.pem, impossibly tiny 0 bytes
	I0407 13:02:45.394599    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0407 13:02:45.394599    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0407 13:02:45.396193    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0407 13:02:45.396567    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0407 13:02:45.396773    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem (1708 bytes)
	I0407 13:02:45.397368    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem -> /usr/share/ca-certificates/7728.pem
	I0407 13:02:45.397592    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> /usr/share/ca-certificates/77282.pem
	I0407 13:02:45.397678    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:02:45.397678    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 13:02:47.494322    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:02:47.495299    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:47.495299    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 13:02:50.016729    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 13:02:50.018174    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:50.018361    7088 sshutil.go:53] new ssh client: &{IP:172.17.95.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\id_rsa Username:docker}
	I0407 13:02:50.115976    7088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0407 13:02:50.125246    7088 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0407 13:02:50.164141    7088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0407 13:02:50.170851    7088 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0407 13:02:50.201863    7088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0407 13:02:50.207966    7088 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0407 13:02:50.236577    7088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0407 13:02:50.244216    7088 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0407 13:02:50.282436    7088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0407 13:02:50.289447    7088 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0407 13:02:50.327954    7088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0407 13:02:50.334630    7088 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0407 13:02:50.356061    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 13:02:50.402453    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 13:02:50.446425    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 13:02:50.498396    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0407 13:02:50.549701    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0407 13:02:50.596602    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 13:02:50.643195    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 13:02:50.688188    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 13:02:50.732546    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem --> /usr/share/ca-certificates/7728.pem (1338 bytes)
	I0407 13:02:50.775318    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem --> /usr/share/ca-certificates/77282.pem (1708 bytes)
	I0407 13:02:50.818123    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 13:02:50.861170    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0407 13:02:50.890212    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0407 13:02:50.920775    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0407 13:02:50.951535    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0407 13:02:50.981806    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0407 13:02:51.011580    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0407 13:02:51.041971    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0407 13:02:51.082788    7088 ssh_runner.go:195] Run: openssl version
	I0407 13:02:51.104816    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7728.pem && ln -fs /usr/share/ca-certificates/7728.pem /etc/ssl/certs/7728.pem"
	I0407 13:02:51.135729    7088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7728.pem
	I0407 13:02:51.142422    7088 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:38 /usr/share/ca-certificates/7728.pem
	I0407 13:02:51.152622    7088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7728.pem
	I0407 13:02:51.171339    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7728.pem /etc/ssl/certs/51391683.0"
	I0407 13:02:51.204158    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77282.pem && ln -fs /usr/share/ca-certificates/77282.pem /etc/ssl/certs/77282.pem"
	I0407 13:02:51.235753    7088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77282.pem
	I0407 13:02:51.242214    7088 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:38 /usr/share/ca-certificates/77282.pem
	I0407 13:02:51.252721    7088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77282.pem
	I0407 13:02:51.270890    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77282.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 13:02:51.302111    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 13:02:51.333303    7088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:02:51.340876    7088 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:02:51.353994    7088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:02:51.372326    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 13:02:51.407105    7088 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 13:02:51.413660    7088 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0407 13:02:51.414009    7088 kubeadm.go:934] updating node {m02 172.17.82.162 8443 v1.32.2 docker true true} ...
	I0407 13:02:51.414230    7088 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-573100-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.82.162
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-573100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
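The kubeadm.go:946 block above is the rendered kubelet systemd drop-in for node m02 (note the --hostname-override and --node-ip flags). A hypothetical sketch of rendering such a unit from the logged values with text/template follows; the template body is a simplified illustration, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// Simplified unit template; only the flags visible in the log are kept.
const unitTmpl = `[Unit]
Wants=docker.socket

[Service]
ExecStart=
ExecStart=/var/lib/minikube/binaries/{{.Version}}/kubelet --hostname-override={{.NodeName}} --node-ip={{.NodeIP}}

[Install]
`

func main() {
	t := template.Must(template.New("kubelet").Parse(unitTmpl))
	// Values taken from the log above.
	_ = t.Execute(os.Stdout, struct {
		Version, NodeName, NodeIP string
	}{"v1.32.2", "ha-573100-m02", "172.17.82.162"})
}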
	I0407 13:02:51.414230    7088 kube-vip.go:115] generating kube-vip config ...
	I0407 13:02:51.428789    7088 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0407 13:02:51.457118    7088 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0407 13:02:51.457207    7088 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.95.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
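The manifest above is the generated kube-vip static pod: it advertises the HA VIP 172.17.95.254 on port 8443 and enables control-plane load-balancing. As a small worked example, the sketch below parses such a manifest and extracts the advertised address; the kube-vip.yaml file path and the minimal struct shape are assumptions for illustration (it uses gopkg.in/yaml.v3).

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// pod covers only the fields needed to read the container env vars.
type pod struct {
	Spec struct {
		Containers []struct {
			Env []struct {
				Name  string `yaml:"name"`
				Value string `yaml:"value"`
			} `yaml:"env"`
		} `yaml:"containers"`
	} `yaml:"spec"`
}

func main() {
	data, err := os.ReadFile("kube-vip.yaml") // placeholder path
	if err != nil {
		panic(err)
	}
	var p pod
	if err := yaml.Unmarshal(data, &p); err != nil {
		panic(err)
	}
	for _, c := range p.Spec.Containers {
		for _, e := range c.Env {
			if e.Name == "address" {
				fmt.Println("kube-vip VIP:", e.Value) // 172.17.95.254 in the log above
			}
		}
	}
}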
	I0407 13:02:51.469254    7088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 13:02:51.485737    7088 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0407 13:02:51.498494    7088 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0407 13:02:51.523972    7088 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl
	I0407 13:02:51.523972    7088 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet
	I0407 13:02:51.523972    7088 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm
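The download.go lines above fetch kubectl, kubelet and kubeadm with a ?checksum=file:... query, i.e. each binary is verified against its published .sha256 file. A hypothetical sketch of that verification step is below; the local file path and the expected hash are placeholders.

package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"io"
	"os"
)

// fileSHA256 streams a file through SHA-256 and returns the hex digest.
func fileSHA256(path string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()
	h := sha256.New()
	if _, err := io.Copy(h, f); err != nil {
		return "", err
	}
	return hex.EncodeToString(h.Sum(nil)), nil
}

func main() {
	got, err := fileSHA256("kubelet") // e.g. the cached ...\v1.32.2\kubelet binary
	if err != nil {
		panic(err)
	}
	want := "<expected sha256 from kubelet.sha256>" // placeholder expected value
	fmt.Println("checksum match:", got == want)
}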
	I0407 13:02:52.560396    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
	I0407 13:02:52.571412    7088 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0407 13:02:52.581461    7088 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0407 13:02:52.581985    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0407 13:02:52.724827    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm -> /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0407 13:02:52.746817    7088 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0407 13:02:52.755823    7088 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0407 13:02:52.755823    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
	I0407 13:02:52.909860    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:02:53.001347    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
	I0407 13:02:53.016447    7088 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0407 13:02:53.045051    7088 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0407 13:02:53.045051    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
	I0407 13:02:53.808058    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0407 13:02:53.824514    7088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0407 13:02:53.851717    7088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 13:02:53.879850    7088 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0407 13:02:53.920680    7088 ssh_runner.go:195] Run: grep 172.17.95.254	control-plane.minikube.internal$ /etc/hosts
	I0407 13:02:53.929501    7088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:02:53.965723    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:02:54.162292    7088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:02:54.190163    7088 host.go:66] Checking if "ha-573100" exists ...
	I0407 13:02:54.191166    7088 start.go:317] joinCluster: &{Name:ha-573100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-573100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.95.223 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.82.162 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:02:54.191166    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0407 13:02:54.191166    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 13:02:56.282443    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:02:56.282443    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:56.282443    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 13:02:58.868220    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 13:02:58.868991    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:02:58.869047    7088 sshutil.go:53] new ssh client: &{IP:172.17.95.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\id_rsa Username:docker}
	I0407 13:02:59.329516    7088 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0": (5.1382295s)
	I0407 13:02:59.329577    7088 start.go:343] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:172.17.82.162 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 13:02:59.329638    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ldyb3b.tts1mdzavw5rgovt --discovery-token-ca-cert-hash sha256:e47514a965ab97a4e144f4fd6f094d10c7d6accee7eb763a802bd4fd6fa9e479 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-573100-m02 --control-plane --apiserver-advertise-address=172.17.82.162 --apiserver-bind-port=8443"
	I0407 13:03:37.630029    7088 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token ldyb3b.tts1mdzavw5rgovt --discovery-token-ca-cert-hash sha256:e47514a965ab97a4e144f4fd6f094d10c7d6accee7eb763a802bd4fd6fa9e479 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-573100-m02 --control-plane --apiserver-advertise-address=172.17.82.162 --apiserver-bind-port=8443": (38.3001611s)
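The join command above authenticates the new control-plane node with a bootstrap token plus a --discovery-token-ca-cert-hash pin. That pin is, to the best of my understanding, the SHA-256 of the cluster CA certificate's DER-encoded Subject Public Key Info; the sketch below computes it from a CA certificate file (the ca.crt path is a placeholder).

package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("ca.crt") // e.g. /var/lib/minikube/certs/ca.crt
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// Hash the DER-encoded Subject Public Key Info, as kubeadm's CA pinning does.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}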
	I0407 13:03:37.630029    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0407 13:03:38.410967    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-573100-m02 minikube.k8s.io/updated_at=2025_04_07T13_03_38_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277 minikube.k8s.io/name=ha-573100 minikube.k8s.io/primary=false
	I0407 13:03:38.626288    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-573100-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0407 13:03:38.778205    7088 start.go:319] duration metric: took 44.5868432s to joinCluster
	I0407 13:03:38.778448    7088 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.17.82.162 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 13:03:38.779315    7088 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:03:38.780867    7088 out.go:177] * Verifying Kubernetes components...
	I0407 13:03:38.797710    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:03:39.191251    7088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:03:39.222139    7088 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 13:03:39.222745    7088 kapi.go:59] client config for ha-573100: &rest.Config{Host:"https://172.17.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-573100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-573100\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2be92e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0407 13:03:39.222893    7088 kubeadm.go:483] Overriding stale ClientConfig host https://172.17.95.254:8443 with https://172.17.95.223:8443
	I0407 13:03:39.223770    7088 node_ready.go:35] waiting up to 6m0s for node "ha-573100-m02" to be "Ready" ...
	I0407 13:03:39.224220    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:39.224291    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:39.224291    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:39.224330    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:39.240495    7088 round_trippers.go:581] Response Status: 200 OK in 16 milliseconds
	I0407 13:03:39.724673    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:39.724673    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:39.724673    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:39.724673    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:39.731344    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:03:40.224867    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:40.224867    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:40.224867    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:40.224867    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:40.235146    7088 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0407 13:03:40.724663    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:40.724663    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:40.724663    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:40.724663    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:40.731199    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:03:41.224903    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:41.224903    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:41.224903    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:41.224903    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:41.230455    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:41.230853    7088 node_ready.go:53] node "ha-573100-m02" has status "Ready":"False"
	I0407 13:03:41.724984    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:41.724984    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:41.724984    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:41.724984    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:41.737965    7088 round_trippers.go:581] Response Status: 200 OK in 12 milliseconds
	I0407 13:03:42.224373    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:42.224373    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:42.224373    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:42.224373    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:42.230850    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:03:42.724884    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:42.724884    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:42.724884    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:42.724884    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:42.730697    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:43.224496    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:43.224496    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:43.224496    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:43.224496    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:43.230220    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:43.724567    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:43.724567    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:43.724567    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:43.724567    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:43.840293    7088 round_trippers.go:581] Response Status: 200 OK in 114 milliseconds
	I0407 13:03:43.840293    7088 node_ready.go:53] node "ha-573100-m02" has status "Ready":"False"
	I0407 13:03:44.225308    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:44.225308    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:44.225308    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:44.225308    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:44.231092    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:44.725053    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:44.725053    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:44.725053    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:44.725053    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:44.731159    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:03:45.224589    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:45.224589    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:45.224589    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:45.224589    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:45.234852    7088 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0407 13:03:45.724360    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:45.724360    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:45.724360    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:45.724360    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:45.731312    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:03:46.224857    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:46.224857    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:46.224857    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:46.224857    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:46.231270    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:03:46.231972    7088 node_ready.go:53] node "ha-573100-m02" has status "Ready":"False"
	I0407 13:03:46.724530    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:46.724530    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:46.724530    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:46.724530    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:46.918652    7088 round_trippers.go:581] Response Status: 200 OK in 194 milliseconds
	I0407 13:03:47.224913    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:47.225032    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:47.225032    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:47.225032    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:47.230316    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:47.724558    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:47.724649    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:47.724649    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:47.724649    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:47.730407    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:48.224020    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:48.224020    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:48.224020    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:48.224020    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:48.228313    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:03:48.724215    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:48.724215    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:48.724215    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:48.724215    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:48.729646    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:48.730192    7088 node_ready.go:53] node "ha-573100-m02" has status "Ready":"False"
	I0407 13:03:49.225511    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:49.225670    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:49.225670    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:49.225670    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:49.231240    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:49.724819    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:49.724819    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:49.724819    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:49.724819    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:49.730569    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:50.224903    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:50.224903    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:50.224977    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:50.224977    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:50.228992    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:03:50.724077    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:50.724077    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:50.724077    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:50.724077    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:50.730212    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:03:50.730745    7088 node_ready.go:53] node "ha-573100-m02" has status "Ready":"False"
	I0407 13:03:51.225029    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:51.225029    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:51.225127    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:51.225127    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:51.230396    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:51.725125    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:51.725125    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:51.725125    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:51.725125    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:51.730490    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:52.225558    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:52.225558    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:52.225558    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:52.225558    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:52.230063    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:03:52.724627    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:52.724627    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:52.724627    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:52.724627    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:52.729627    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:53.224547    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:53.224547    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:53.224547    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:53.224547    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:53.230122    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:53.230337    7088 node_ready.go:53] node "ha-573100-m02" has status "Ready":"False"
	I0407 13:03:53.724826    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:53.724826    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:53.724826    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:53.724826    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:53.730025    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:54.224833    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:54.224833    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:54.224833    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:54.224833    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:54.230051    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:54.724198    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:54.724198    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:54.724198    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:54.724198    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:54.729634    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:55.224107    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:55.224107    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:55.224107    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:55.224107    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:55.230028    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:55.230415    7088 node_ready.go:53] node "ha-573100-m02" has status "Ready":"False"
	I0407 13:03:55.725497    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:55.725586    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:55.725586    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:55.725586    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:55.730261    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:03:56.224426    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:56.224426    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:56.224426    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:56.224426    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:56.230215    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:56.725186    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:56.725264    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:56.725291    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:56.725291    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:56.730536    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:57.224545    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:57.224620    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:57.224620    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:57.224620    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:57.229919    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:57.724634    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:57.724634    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:57.724634    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:57.724634    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:57.729921    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:57.729921    7088 node_ready.go:53] node "ha-573100-m02" has status "Ready":"False"
	I0407 13:03:58.224392    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:58.224392    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:58.224392    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:58.224392    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:58.230185    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:58.725099    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:58.725099    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:58.725099    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:58.725099    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:58.729909    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:03:59.224233    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:59.224233    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:59.224233    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:59.224233    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:59.230005    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:59.724599    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:03:59.724599    7088 round_trippers.go:476] Request Headers:
	I0407 13:03:59.724599    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:03:59.724599    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:03:59.730287    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:03:59.731290    7088 node_ready.go:53] node "ha-573100-m02" has status "Ready":"False"
	I0407 13:04:00.224725    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:00.224725    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:00.224725    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:00.224725    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:00.230502    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:04:00.725482    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:00.725482    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:00.725482    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:00.725482    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:00.731546    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:04:01.224145    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:01.224145    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:01.224145    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:01.224145    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:01.229773    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:04:01.724606    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:01.724606    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:01.724606    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:01.724606    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:01.729180    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:04:02.224906    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:02.224991    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:02.225057    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:02.225057    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:02.229692    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:04:02.230939    7088 node_ready.go:53] node "ha-573100-m02" has status "Ready":"False"
	I0407 13:04:02.724203    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:02.724203    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:02.724203    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:02.724203    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:02.730099    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:04:03.224698    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:03.224698    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:03.224698    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:03.224698    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:03.239038    7088 round_trippers.go:581] Response Status: 200 OK in 14 milliseconds
	I0407 13:04:03.239436    7088 node_ready.go:49] node "ha-573100-m02" has status "Ready":"True"
	I0407 13:04:03.239492    7088 node_ready.go:38] duration metric: took 24.0154683s for node "ha-573100-m02" to be "Ready" ...
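
	[editor's note] The wait loop above is minikube polling GET /api/v1/nodes/ha-573100-m02 on a roughly 500ms cadence until the node's Ready condition reports True. Below is a minimal client-go sketch of the same check, for readers reproducing it outside the test harness; the kubeconfig path is a placeholder, not a value from this run.

// nodeready_sketch.go: poll a node's Ready condition, similar in spirit to
// the node_ready wait loop logged above. The kubeconfig path is illustrative.
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; minikube writes its own under MINIKUBE_HOME.
	cfg, err := clientcmd.BuildConfigFromFlags("", "C:\\Users\\me\\.kube\\config")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-573100-m02", metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					fmt.Println("node is Ready")
					return
				}
			}
		}
		time.Sleep(500 * time.Millisecond) // matches the ~500ms cadence seen in the log
	}
}
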
	I0407 13:04:03.239492    7088 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:04:03.239732    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods
	I0407 13:04:03.239732    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:03.239732    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:03.239795    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:03.252547    7088 round_trippers.go:581] Response Status: 200 OK in 12 milliseconds
	I0407 13:04:03.255541    7088 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-whpg2" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:03.255541    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-whpg2
	I0407 13:04:03.255541    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:03.255541    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:03.255541    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:03.264986    7088 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0407 13:04:03.265331    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:04:03.265331    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:03.265331    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:03.265331    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:03.277287    7088 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0407 13:04:03.277669    7088 pod_ready.go:93] pod "coredns-668d6bf9bc-whpg2" in "kube-system" namespace has status "Ready":"True"
	I0407 13:04:03.277669    7088 pod_ready.go:82] duration metric: took 22.1285ms for pod "coredns-668d6bf9bc-whpg2" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:03.277805    7088 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-z4nkw" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:03.277941    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-z4nkw
	I0407 13:04:03.277941    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:03.277941    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:03.277941    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:03.284688    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:04:03.284810    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:04:03.284810    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:03.284810    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:03.284810    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:03.302456    7088 round_trippers.go:581] Response Status: 200 OK in 17 milliseconds
	I0407 13:04:03.303491    7088 pod_ready.go:93] pod "coredns-668d6bf9bc-z4nkw" in "kube-system" namespace has status "Ready":"True"
	I0407 13:04:03.303563    7088 pod_ready.go:82] duration metric: took 25.7577ms for pod "coredns-668d6bf9bc-z4nkw" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:03.303563    7088 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:03.303693    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/etcd-ha-573100
	I0407 13:04:03.303733    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:03.303733    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:03.303733    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:03.324056    7088 round_trippers.go:581] Response Status: 200 OK in 20 milliseconds
	I0407 13:04:03.324101    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:04:03.324101    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:03.324101    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:03.324101    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:03.327994    7088 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 13:04:03.328764    7088 pod_ready.go:93] pod "etcd-ha-573100" in "kube-system" namespace has status "Ready":"True"
	I0407 13:04:03.329117    7088 pod_ready.go:82] duration metric: took 25.5545ms for pod "etcd-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:03.329259    7088 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:03.329259    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/etcd-ha-573100-m02
	I0407 13:04:03.329413    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:03.329413    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:03.329413    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:03.333484    7088 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 13:04:03.333919    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:03.333971    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:03.333971    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:03.333971    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:03.338283    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:04:03.338837    7088 pod_ready.go:93] pod "etcd-ha-573100-m02" in "kube-system" namespace has status "Ready":"True"
	I0407 13:04:03.339393    7088 pod_ready.go:82] duration metric: took 10.1349ms for pod "etcd-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:03.339393    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:03.425486    7088 request.go:661] Waited for 86.0924ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-573100
	I0407 13:04:03.425486    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-573100
	I0407 13:04:03.425486    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:03.425486    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:03.425486    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:03.430341    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:04:03.624998    7088 request.go:661] Waited for 193.0367ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:04:03.624998    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:04:03.624998    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:03.624998    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:03.624998    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:03.630110    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:04:03.630338    7088 pod_ready.go:93] pod "kube-apiserver-ha-573100" in "kube-system" namespace has status "Ready":"True"
	I0407 13:04:03.630338    7088 pod_ready.go:82] duration metric: took 290.9429ms for pod "kube-apiserver-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:03.630338    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:03.824909    7088 request.go:661] Waited for 194.5703ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-573100-m02
	I0407 13:04:03.825353    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-573100-m02
	I0407 13:04:03.825353    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:03.825353    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:03.825353    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:03.830917    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:04:04.025154    7088 request.go:661] Waited for 193.8529ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:04.025154    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:04.025610    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:04.025653    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:04.025653    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:04.029867    7088 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 13:04:04.030168    7088 pod_ready.go:93] pod "kube-apiserver-ha-573100-m02" in "kube-system" namespace has status "Ready":"True"
	I0407 13:04:04.030168    7088 pod_ready.go:82] duration metric: took 399.8289ms for pod "kube-apiserver-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:04.030168    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:04.225325    7088 request.go:661] Waited for 195.1555ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-573100
	I0407 13:04:04.225884    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-573100
	I0407 13:04:04.225923    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:04.225923    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:04.225985    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:04.230163    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:04:04.425438    7088 request.go:661] Waited for 195.0674ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:04:04.425438    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:04:04.425438    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:04.425438    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:04.425438    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:04.429802    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:04:04.429802    7088 pod_ready.go:93] pod "kube-controller-manager-ha-573100" in "kube-system" namespace has status "Ready":"True"
	I0407 13:04:04.429802    7088 pod_ready.go:82] duration metric: took 399.6318ms for pod "kube-controller-manager-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:04.430340    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:04.625506    7088 request.go:661] Waited for 195.1039ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-573100-m02
	I0407 13:04:04.625506    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-573100-m02
	I0407 13:04:04.625506    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:04.625506    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:04.626069    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:04.630887    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:04:04.825571    7088 request.go:661] Waited for 194.2757ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:04.825864    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:04.826014    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:04.826068    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:04.826068    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:04.832425    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:04:04.832425    7088 pod_ready.go:93] pod "kube-controller-manager-ha-573100-m02" in "kube-system" namespace has status "Ready":"True"
	I0407 13:04:04.832962    7088 pod_ready.go:82] duration metric: took 402.6207ms for pod "kube-controller-manager-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:04.832962    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sxkgm" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:05.025129    7088 request.go:661] Waited for 191.922ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sxkgm
	I0407 13:04:05.025129    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sxkgm
	I0407 13:04:05.025129    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:05.025129    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:05.025129    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:05.030525    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:04:05.224801    7088 request.go:661] Waited for 193.7488ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:05.225252    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:05.225252    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:05.225252    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:05.225252    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:05.229861    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:04:05.229861    7088 pod_ready.go:93] pod "kube-proxy-sxkgm" in "kube-system" namespace has status "Ready":"True"
	I0407 13:04:05.229861    7088 pod_ready.go:82] duration metric: took 396.8974ms for pod "kube-proxy-sxkgm" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:05.229861    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xsgf7" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:05.424981    7088 request.go:661] Waited for 195.1194ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsgf7
	I0407 13:04:05.424981    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsgf7
	I0407 13:04:05.424981    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:05.424981    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:05.424981    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:05.431627    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:04:05.625673    7088 request.go:661] Waited for 193.5397ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:04:05.626176    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:04:05.626176    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:05.626176    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:05.626242    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:05.630567    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:04:05.630567    7088 pod_ready.go:93] pod "kube-proxy-xsgf7" in "kube-system" namespace has status "Ready":"True"
	I0407 13:04:05.630567    7088 pod_ready.go:82] duration metric: took 400.7039ms for pod "kube-proxy-xsgf7" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:05.630567    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:05.824913    7088 request.go:661] Waited for 194.3451ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-573100
	I0407 13:04:05.824913    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-573100
	I0407 13:04:05.824913    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:05.824913    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:05.824913    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:05.829608    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:04:06.025096    7088 request.go:661] Waited for 194.9629ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:04:06.025466    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:04:06.025466    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:06.025466    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:06.025466    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:06.035456    7088 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0407 13:04:06.036010    7088 pod_ready.go:93] pod "kube-scheduler-ha-573100" in "kube-system" namespace has status "Ready":"True"
	I0407 13:04:06.036052    7088 pod_ready.go:82] duration metric: took 405.4837ms for pod "kube-scheduler-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:06.036076    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:06.225413    7088 request.go:661] Waited for 189.336ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-573100-m02
	I0407 13:04:06.225810    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-573100-m02
	I0407 13:04:06.225810    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:06.225810    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:06.225810    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:06.231125    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:04:06.425918    7088 request.go:661] Waited for 194.3474ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:06.425918    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:04:06.425918    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:06.425918    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:06.425918    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:06.433161    7088 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 13:04:06.434280    7088 pod_ready.go:93] pod "kube-scheduler-ha-573100-m02" in "kube-system" namespace has status "Ready":"True"
	I0407 13:04:06.434280    7088 pod_ready.go:82] duration metric: took 398.202ms for pod "kube-scheduler-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:04:06.434280    7088 pod_ready.go:39] duration metric: took 3.1946823s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
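
	[editor's note] The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines above come from client-go's client-side rate limiter (roughly 5 requests/second with a burst of 10 by default), not from server-side API Priority and Fairness. If the same checks were scripted directly, the limiter could be relaxed on the rest.Config; this is a sketch only, and the QPS/Burst values are assumptions, not what minikube itself uses.

// throttle_sketch.go: build a clientset with a relaxed client-side rate
// limiter. The default limiter is what produces the "client-side throttling"
// messages in the log above. Values here are illustrative.
package sketch

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newFastClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	cfg.QPS = 50    // raise from the default of ~5 requests/second
	cfg.Burst = 100 // raise from the default burst of 10
	return kubernetes.NewForConfig(cfg)
}
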
	I0407 13:04:06.434280    7088 api_server.go:52] waiting for apiserver process to appear ...
	I0407 13:04:06.446383    7088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:04:06.472714    7088 api_server.go:72] duration metric: took 27.6941438s to wait for apiserver process to appear ...
	I0407 13:04:06.472714    7088 api_server.go:88] waiting for apiserver healthz status ...
	I0407 13:04:06.472714    7088 api_server.go:253] Checking apiserver healthz at https://172.17.95.223:8443/healthz ...
	I0407 13:04:06.485038    7088 api_server.go:279] https://172.17.95.223:8443/healthz returned 200:
	ok
	I0407 13:04:06.485176    7088 round_trippers.go:470] GET https://172.17.95.223:8443/version
	I0407 13:04:06.485194    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:06.485194    7088 round_trippers.go:480]     Accept: application/json, */*
	I0407 13:04:06.485194    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:06.486947    7088 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0407 13:04:06.486947    7088 api_server.go:141] control plane version: v1.32.2
	I0407 13:04:06.486947    7088 api_server.go:131] duration metric: took 14.2338ms to wait for apiserver health ...
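
	[editor's note] The health check above probes /healthz (expecting the bare "ok" body logged a few lines up) and then /version to read the control-plane version. A small sketch of the same probe through an authenticated clientset follows; the function name is made up for illustration.

// healthz_sketch.go: hit the apiserver's /healthz and /version endpoints,
// mirroring the health check in the log above.
package sketch

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

func checkAPIServer(cs *kubernetes.Clientset) error {
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		return err
	}
	fmt.Printf("healthz: %s\n", body) // expect "ok", as logged above

	ver, err := cs.Discovery().ServerVersion()
	if err != nil {
		return err
	}
	fmt.Printf("control plane version: %s\n", ver.GitVersion) // e.g. v1.32.2
	return nil
}
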
	I0407 13:04:06.486947    7088 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 13:04:06.625358    7088 request.go:661] Waited for 137.8702ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods
	I0407 13:04:06.625358    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods
	I0407 13:04:06.625358    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:06.625358    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:06.625358    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:06.631889    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:04:06.634000    7088 system_pods.go:59] 17 kube-system pods found
	I0407 13:04:06.634000    7088 system_pods.go:61] "coredns-668d6bf9bc-whpg2" [48faa3ce-0f1f-4c88-8298-15960d3c75a7] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "coredns-668d6bf9bc-z4nkw" [4aa968e7-d945-4f70-932d-b42417702382] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "etcd-ha-573100" [c473d0ab-e66d-4b41-ad43-edce5e371027] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "etcd-ha-573100-m02" [0f05d56b-d0f5-4505-9d54-127111d30d27] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "kindnet-fxxw5" [4fc9602a-d72f-4421-96a3-a7b0b35e2ce6] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "kindnet-vhm9b" [355feff9-5819-4d85-82f0-2281fdcc5d5a] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "kube-apiserver-ha-573100" [60830754-3b25-4753-9ec0-d9cef7b7b548] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "kube-apiserver-ha-573100-m02" [5fa8bf0c-a2ff-4b0d-8e9f-a42172533517] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "kube-controller-manager-ha-573100" [0c4d6f0d-d4ae-40cd-bfa7-b7f39dff081e] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "kube-controller-manager-ha-573100-m02" [cb31520b-fa77-4ceb-a798-c45f10c87d10] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "kube-proxy-sxkgm" [6e0a6f3f-a949-4b95-aaaa-d74c1a7e0efe] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "kube-proxy-xsgf7" [1bccfdb6-28f7-4190-a5a1-9316cfdf215e] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "kube-scheduler-ha-573100" [d46211dc-ab95-474b-abfc-218808a4d1aa] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "kube-scheduler-ha-573100-m02" [1fd3b48a-ef70-4cce-b7d4-24b44331bfba] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "kube-vip-ha-573100" [b8e24d1a-1309-482f-9734-99bcf4812448] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "kube-vip-ha-573100-m02" [6e3ad003-a31a-49de-841f-2e21e31f094d] Running
	I0407 13:04:06.634000    7088 system_pods.go:61] "storage-provisioner" [8d89f971-c575-4089-b12b-823fe7524dc2] Running
	I0407 13:04:06.634000    7088 system_pods.go:74] duration metric: took 147.052ms to wait for pod list to return data ...
	I0407 13:04:06.634000    7088 default_sa.go:34] waiting for default service account to be created ...
	I0407 13:04:06.825527    7088 request.go:661] Waited for 191.5263ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/default/serviceaccounts
	I0407 13:04:06.826009    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/default/serviceaccounts
	I0407 13:04:06.826090    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:06.826090    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:06.826090    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:06.830792    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:04:06.831512    7088 default_sa.go:45] found service account: "default"
	I0407 13:04:06.831546    7088 default_sa.go:55] duration metric: took 197.5448ms for default service account to be created ...
	I0407 13:04:06.831602    7088 system_pods.go:116] waiting for k8s-apps to be running ...
	I0407 13:04:07.025173    7088 request.go:661] Waited for 193.5214ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods
	I0407 13:04:07.025173    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods
	I0407 13:04:07.025639    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:07.025639    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:07.025639    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:07.031498    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:04:07.034304    7088 system_pods.go:86] 17 kube-system pods found
	I0407 13:04:07.034347    7088 system_pods.go:89] "coredns-668d6bf9bc-whpg2" [48faa3ce-0f1f-4c88-8298-15960d3c75a7] Running
	I0407 13:04:07.034394    7088 system_pods.go:89] "coredns-668d6bf9bc-z4nkw" [4aa968e7-d945-4f70-932d-b42417702382] Running
	I0407 13:04:07.034394    7088 system_pods.go:89] "etcd-ha-573100" [c473d0ab-e66d-4b41-ad43-edce5e371027] Running
	I0407 13:04:07.034394    7088 system_pods.go:89] "etcd-ha-573100-m02" [0f05d56b-d0f5-4505-9d54-127111d30d27] Running
	I0407 13:04:07.034394    7088 system_pods.go:89] "kindnet-fxxw5" [4fc9602a-d72f-4421-96a3-a7b0b35e2ce6] Running
	I0407 13:04:07.034394    7088 system_pods.go:89] "kindnet-vhm9b" [355feff9-5819-4d85-82f0-2281fdcc5d5a] Running
	I0407 13:04:07.034394    7088 system_pods.go:89] "kube-apiserver-ha-573100" [60830754-3b25-4753-9ec0-d9cef7b7b548] Running
	I0407 13:04:07.034394    7088 system_pods.go:89] "kube-apiserver-ha-573100-m02" [5fa8bf0c-a2ff-4b0d-8e9f-a42172533517] Running
	I0407 13:04:07.034394    7088 system_pods.go:89] "kube-controller-manager-ha-573100" [0c4d6f0d-d4ae-40cd-bfa7-b7f39dff081e] Running
	I0407 13:04:07.034586    7088 system_pods.go:89] "kube-controller-manager-ha-573100-m02" [cb31520b-fa77-4ceb-a798-c45f10c87d10] Running
	I0407 13:04:07.034586    7088 system_pods.go:89] "kube-proxy-sxkgm" [6e0a6f3f-a949-4b95-aaaa-d74c1a7e0efe] Running
	I0407 13:04:07.034586    7088 system_pods.go:89] "kube-proxy-xsgf7" [1bccfdb6-28f7-4190-a5a1-9316cfdf215e] Running
	I0407 13:04:07.034586    7088 system_pods.go:89] "kube-scheduler-ha-573100" [d46211dc-ab95-474b-abfc-218808a4d1aa] Running
	I0407 13:04:07.034586    7088 system_pods.go:89] "kube-scheduler-ha-573100-m02" [1fd3b48a-ef70-4cce-b7d4-24b44331bfba] Running
	I0407 13:04:07.034586    7088 system_pods.go:89] "kube-vip-ha-573100" [b8e24d1a-1309-482f-9734-99bcf4812448] Running
	I0407 13:04:07.034586    7088 system_pods.go:89] "kube-vip-ha-573100-m02" [6e3ad003-a31a-49de-841f-2e21e31f094d] Running
	I0407 13:04:07.034586    7088 system_pods.go:89] "storage-provisioner" [8d89f971-c575-4089-b12b-823fe7524dc2] Running
	I0407 13:04:07.034586    7088 system_pods.go:126] duration metric: took 202.9824ms to wait for k8s-apps to be running ...
	I0407 13:04:07.034712    7088 system_svc.go:44] waiting for kubelet service to be running ....
	I0407 13:04:07.046776    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:04:07.069495    7088 system_svc.go:56] duration metric: took 34.9092ms WaitForService to wait for kubelet
	I0407 13:04:07.069495    7088 kubeadm.go:582] duration metric: took 28.2909227s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 13:04:07.070506    7088 node_conditions.go:102] verifying NodePressure condition ...
	I0407 13:04:07.225437    7088 request.go:661] Waited for 154.931ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes
	I0407 13:04:07.225437    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes
	I0407 13:04:07.225437    7088 round_trippers.go:476] Request Headers:
	I0407 13:04:07.225437    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:04:07.225437    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:04:07.231866    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:04:07.232462    7088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 13:04:07.232515    7088 node_conditions.go:123] node cpu capacity is 2
	I0407 13:04:07.232594    7088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 13:04:07.232594    7088 node_conditions.go:123] node cpu capacity is 2
	I0407 13:04:07.232594    7088 node_conditions.go:105] duration metric: took 162.0875ms to run NodePressure ...
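
	[editor's note] The NodePressure verification above reads each node's capacity (ephemeral-storage 17734596Ki and cpu 2 for both nodes in this run). A short sketch of pulling those fields from node status; the helper name is illustrative.

// capacity_sketch.go: read the per-node capacity fields that the
// NodePressure check above reports (ephemeral-storage and cpu).
package sketch

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func printNodeCapacity(cs *kubernetes.Clientset) error {
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	}
	return nil
}
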
	I0407 13:04:07.232594    7088 start.go:241] waiting for startup goroutines ...
	I0407 13:04:07.232594    7088 start.go:255] writing updated cluster config ...
	I0407 13:04:07.238145    7088 out.go:201] 
	I0407 13:04:07.258378    7088 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:04:07.258600    7088 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\config.json ...
	I0407 13:04:07.263992    7088 out.go:177] * Starting "ha-573100-m03" control-plane node in "ha-573100" cluster
	I0407 13:04:07.267324    7088 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 13:04:07.267324    7088 cache.go:56] Caching tarball of preloaded images
	I0407 13:04:07.268372    7088 preload.go:172] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0407 13:04:07.268372    7088 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0407 13:04:07.268372    7088 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\config.json ...
	I0407 13:04:07.276648    7088 start.go:360] acquireMachinesLock for ha-573100-m03: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 13:04:07.276648    7088 start.go:364] duration metric: took 0s to acquireMachinesLock for "ha-573100-m03"
	I0407 13:04:07.276648    7088 start.go:93] Provisioning new machine with config: &{Name:ha-573100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName
:ha-573100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.95.223 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.82.162 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false ins
pektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror
: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 13:04:07.276648    7088 start.go:125] createHost starting for "m03" (driver="hyperv")
	I0407 13:04:07.282944    7088 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0407 13:04:07.282944    7088 start.go:159] libmachine.API.Create for "ha-573100" (driver="hyperv")
	I0407 13:04:07.283589    7088 client.go:168] LocalClient.Create starting
	I0407 13:04:07.283822    7088 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0407 13:04:07.284488    7088 main.go:141] libmachine: Decoding PEM data...
	I0407 13:04:07.284488    7088 main.go:141] libmachine: Parsing certificate...
	I0407 13:04:07.284787    7088 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0407 13:04:07.284972    7088 main.go:141] libmachine: Decoding PEM data...
	I0407 13:04:07.284972    7088 main.go:141] libmachine: Parsing certificate...
	I0407 13:04:07.284972    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0407 13:04:09.160636    7088 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0407 13:04:09.160866    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:09.160866    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0407 13:04:10.847787    7088 main.go:141] libmachine: [stdout =====>] : False
	
	I0407 13:04:10.847787    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:10.848058    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0407 13:04:12.338268    7088 main.go:141] libmachine: [stdout =====>] : True
	
	I0407 13:04:12.338268    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:12.338603    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0407 13:04:16.045163    7088 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0407 13:04:16.045163    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:16.047527    7088 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0407 13:04:16.543823    7088 main.go:141] libmachine: Creating SSH key...
	I0407 13:04:17.031475    7088 main.go:141] libmachine: Creating VM...
	I0407 13:04:17.032557    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0407 13:04:19.904079    7088 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0407 13:04:19.904204    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:19.904388    7088 main.go:141] libmachine: Using switch "Default Switch"
	I0407 13:04:19.904449    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0407 13:04:21.661704    7088 main.go:141] libmachine: [stdout =====>] : True
	
	I0407 13:04:21.661704    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:21.662184    7088 main.go:141] libmachine: Creating VHD
	I0407 13:04:21.662184    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m03\fixed.vhd' -SizeBytes 10MB -Fixed
	I0407 13:04:25.491148    7088 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m03\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 1AF31488-5A7D-43FF-A7AF-C656F6973173
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0407 13:04:25.491900    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:25.491900    7088 main.go:141] libmachine: Writing magic tar header
	I0407 13:04:25.491900    7088 main.go:141] libmachine: Writing SSH key tar header
	I0407 13:04:25.505275    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m03\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m03\disk.vhd' -VHDType Dynamic -DeleteSource
	I0407 13:04:28.727771    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:04:28.727771    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:28.728475    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m03\disk.vhd' -SizeBytes 20000MB
	I0407 13:04:31.318240    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:04:31.318698    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:31.318698    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM ha-573100-m03 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m03' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0407 13:04:35.051272    7088 main.go:141] libmachine: [stdout =====>] : 
	Name          State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----          ----- ----------- ----------------- ------   ------             -------
	ha-573100-m03 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0407 13:04:35.051272    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:35.052145    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName ha-573100-m03 -DynamicMemoryEnabled $false
	I0407 13:04:37.359544    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:04:37.359720    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:37.359720    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor ha-573100-m03 -Count 2
	I0407 13:04:39.640505    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:04:39.640612    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:39.640612    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName ha-573100-m03 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m03\boot2docker.iso'
	I0407 13:04:42.267283    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:04:42.267283    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:42.267283    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName ha-573100-m03 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m03\disk.vhd'
	I0407 13:04:44.963564    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:04:44.964304    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:44.964304    7088 main.go:141] libmachine: Starting VM...
	I0407 13:04:44.964368    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM ha-573100-m03
	I0407 13:04:48.147527    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:04:48.147527    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:48.147527    7088 main.go:141] libmachine: Waiting for host to start...
	I0407 13:04:48.147621    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:04:50.469306    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:04:50.469306    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:50.469736    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:04:53.004289    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:04:53.004419    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:54.004847    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:04:56.282109    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:04:56.282199    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:56.282199    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:04:58.842581    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:04:58.842581    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:04:59.842758    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:05:02.101253    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:05:02.101802    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:02.101802    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:05:04.622860    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:05:04.623874    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:05.625093    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:05:07.839447    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:05:07.840154    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:07.840154    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:05:10.439212    7088 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:05:10.440169    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:11.441232    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:05:13.706866    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:05:13.707743    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:13.707743    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:05:16.368666    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:05:16.368666    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:16.368666    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:05:18.488237    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:05:18.488237    7088 main.go:141] libmachine: [stderr =====>] : 
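
	[editor's note] The "Waiting for host to start..." loop above is the Hyper-V libmachine driver shelling out to PowerShell about once per second, first for ( Hyper-V\Get-VM ).state and then for the first NIC address, until an IP (172.17.94.27 here) appears. A rough Go sketch of that polling pattern; the VM name and helper names are placeholders, not the driver's actual functions.

// hyperv_sketch.go: poll a Hyper-V VM's state and first IP address by
// shelling out to PowerShell, as the log above shows the driver doing.
package sketch

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

func runPS(cmd string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).CombinedOutput()
	return strings.TrimSpace(string(out)), err
}

// waitForIP blocks until Hyper-V reports an address for the VM's first NIC.
func waitForIP(vm string) (string, error) {
	for {
		state, err := runPS(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
		if err != nil {
			return "", err
		}
		if state != "Running" {
			return "", fmt.Errorf("vm %s is %s, not Running", vm, state)
		}
		ip, err := runPS(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
		if err != nil {
			return "", err
		}
		if ip != "" {
			return ip, nil
		}
		time.Sleep(time.Second) // the driver above retries roughly once per second
	}
}
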
	I0407 13:05:18.488237    7088 machine.go:93] provisionDockerMachine start ...
	I0407 13:05:18.488237    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:05:20.654388    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:05:20.654456    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:20.654456    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:05:23.210635    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:05:23.211288    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:23.219661    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:05:23.236391    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.94.27 22 <nil> <nil>}
	I0407 13:05:23.236391    7088 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 13:05:23.373662    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0407 13:05:23.373662    7088 buildroot.go:166] provisioning hostname "ha-573100-m03"
	I0407 13:05:23.373662    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:05:25.511025    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:05:25.511666    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:25.511666    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:05:28.070653    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:05:28.070653    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:28.077488    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:05:28.078079    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.94.27 22 <nil> <nil>}
	I0407 13:05:28.078160    7088 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-573100-m03 && echo "ha-573100-m03" | sudo tee /etc/hostname
	I0407 13:05:28.254477    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-573100-m03
	
	I0407 13:05:28.254477    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:05:30.469936    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:05:30.470295    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:30.470295    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:05:33.060519    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:05:33.060519    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:33.067199    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:05:33.067259    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.94.27 22 <nil> <nil>}
	I0407 13:05:33.067259    7088 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-573100-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-573100-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-573100-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 13:05:33.225986    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:05:33.225986    7088 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0407 13:05:33.225986    7088 buildroot.go:174] setting up certificates
	I0407 13:05:33.226515    7088 provision.go:84] configureAuth start
	I0407 13:05:33.226616    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:05:35.396565    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:05:35.397012    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:35.397012    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:05:37.989220    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:05:37.989220    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:37.989452    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:05:40.159704    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:05:40.159802    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:40.159865    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:05:42.743428    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:05:42.743945    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:42.744021    7088 provision.go:143] copyHostCerts
	I0407 13:05:42.744021    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0407 13:05:42.744021    7088 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0407 13:05:42.744021    7088 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0407 13:05:42.744782    7088 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0407 13:05:42.745467    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0407 13:05:42.746245    7088 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0407 13:05:42.746290    7088 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0407 13:05:42.746290    7088 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0407 13:05:42.747574    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0407 13:05:42.747574    7088 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0407 13:05:42.747574    7088 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0407 13:05:42.747574    7088 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0407 13:05:42.749383    7088 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.ha-573100-m03 san=[127.0.0.1 172.17.94.27 ha-573100-m03 localhost minikube]
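For reference, the provisioning step above issues a per-node server certificate whose SAN list mixes IP addresses and host names. Below is a minimal Go sketch of signing such a certificate from an existing CA; it is illustrative only, not minikube's provision code, and the file names, PKCS#1 key format, key size and validity period are assumptions.

// A sketch of generating a server certificate with the SAN list shown in the
// log line above, signed by an existing CA key pair read from assumed paths.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs taken from the log: IPs plus host names for the m03 node.
	ips := []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.94.27")}
	dns := []string{"ha-573100-m03", "localhost", "minikube"}

	caCert, caKey := loadCA("ca.pem", "ca-key.pem") // assumed file names

	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-573100-m03"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  ips,
		DNSNames:     dns,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		panic(err)
	}
	out, err := os.Create("server.pem")
	if err != nil {
		panic(err)
	}
	defer out.Close()
	pem.Encode(out, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}

// loadCA reads a PEM-encoded CA certificate and a PKCS#1 RSA private key.
func loadCA(certPath, keyPath string) (*x509.Certificate, *rsa.PrivateKey) {
	cb, _ := os.ReadFile(certPath)
	kb, _ := os.ReadFile(keyPath)
	cBlock, _ := pem.Decode(cb)
	kBlock, _ := pem.Decode(kb)
	cert, _ := x509.ParseCertificate(cBlock.Bytes)
	key, _ := x509.ParsePKCS1PrivateKey(kBlock.Bytes)
	return cert, key
}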
	I0407 13:05:42.859521    7088 provision.go:177] copyRemoteCerts
	I0407 13:05:42.869470    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 13:05:42.869470    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:05:45.016319    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:05:45.016319    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:45.016401    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:05:47.571174    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:05:47.572152    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:47.572395    7088 sshutil.go:53] new ssh client: &{IP:172.17.94.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m03\id_rsa Username:docker}
	I0407 13:05:47.677174    7088 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8076259s)
	I0407 13:05:47.677230    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0407 13:05:47.677524    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 13:05:47.724728    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0407 13:05:47.725205    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1208 bytes)
	I0407 13:05:47.768523    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0407 13:05:47.769032    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0407 13:05:47.818533    7088 provision.go:87] duration metric: took 14.5919517s to configureAuth
	I0407 13:05:47.818593    7088 buildroot.go:189] setting minikube options for container-runtime
	I0407 13:05:47.819211    7088 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:05:47.819379    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:05:50.018080    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:05:50.018588    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:50.018588    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:05:52.559397    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:05:52.559681    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:52.564228    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:05:52.564881    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.94.27 22 <nil> <nil>}
	I0407 13:05:52.564881    7088 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0407 13:05:52.691946    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0407 13:05:52.692084    7088 buildroot.go:70] root file system type: tmpfs
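The root-filesystem probe above is just "df --output=fstype / | tail -n 1" run over SSH, which returns "tmpfs" on the Buildroot guest. A minimal local Go sketch of the same check follows; it is assumed to run on the Linux guest itself rather than on the Windows host, where minikube instead sends the pipeline through its SSH runner.

// A sketch of the root-filesystem check the log runs over SSH.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	out, err := exec.Command("sh", "-c", "df --output=fstype / | tail -n 1").Output()
	if err != nil {
		panic(err)
	}
	// Prints "tmpfs" on the Buildroot guest shown in this log.
	fmt.Println("root file system type:", strings.TrimSpace(string(out)))
}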
	I0407 13:05:52.692251    7088 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0407 13:05:52.692340    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:05:54.831347    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:05:54.832407    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:54.832463    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:05:57.390838    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:05:57.391081    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:57.396323    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:05:57.396922    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.94.27 22 <nil> <nil>}
	I0407 13:05:57.396922    7088 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.95.223"
	Environment="NO_PROXY=172.17.95.223,172.17.82.162"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0407 13:05:57.545430    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.95.223
	Environment=NO_PROXY=172.17.95.223,172.17.82.162
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0407 13:05:57.545430    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:05:59.684717    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:05:59.684717    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:05:59.685168    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:06:02.302488    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:06:02.302488    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:02.309086    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:06:02.309698    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.94.27 22 <nil> <nil>}
	I0407 13:06:02.309698    7088 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0407 13:06:04.544559    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0407 13:06:04.544559    7088 machine.go:96] duration metric: took 46.0561148s to provisionDockerMachine
	I0407 13:06:04.544559    7088 client.go:171] duration metric: took 1m57.2604433s to LocalClient.Create
	I0407 13:06:04.544559    7088 start.go:167] duration metric: took 1m57.2610887s to libmachine.API.Create "ha-573100"
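The docker.service unit echoed back above is produced by rendering a unit template with the cluster's accumulated NO_PROXY values and the dockerd flags. A minimal text/template sketch in Go is shown below; it is illustrative only, the template is abbreviated, and it is not minikube's actual unit template.

// A sketch of rendering a docker.service unit with per-node NO_PROXY values,
// similar in shape to the unit written out in the log above.
package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Description=Docker Application Container Engine
After=network.target minikube-automount.service docker.socket
Requires=minikube-automount.service docker.socket

[Service]
Type=notify
Restart=on-failure
{{range .NoProxy}}Environment="NO_PROXY={{.}}"
{{end}}
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem

[Install]
WantedBy=multi-user.target
`

func main() {
	// Cumulative NO_PROXY values as each control-plane node joins,
	// matching the two Environment lines seen in the log.
	data := struct{ NoProxy []string }{
		NoProxy: []string{"172.17.95.223", "172.17.95.223,172.17.82.162"},
	}
	t := template.Must(template.New("docker.service").Parse(unit))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}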
	I0407 13:06:04.544559    7088 start.go:293] postStartSetup for "ha-573100-m03" (driver="hyperv")
	I0407 13:06:04.544853    7088 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 13:06:04.556145    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 13:06:04.556145    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:06:06.714337    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:06:06.714337    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:06.714581    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:06:09.257215    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:06:09.257215    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:09.257215    7088 sshutil.go:53] new ssh client: &{IP:172.17.94.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m03\id_rsa Username:docker}
	I0407 13:06:09.369817    7088 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8136498s)
	I0407 13:06:09.380938    7088 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 13:06:09.388496    7088 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 13:06:09.388496    7088 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0407 13:06:09.389220    7088 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0407 13:06:09.390182    7088 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> 77282.pem in /etc/ssl/certs
	I0407 13:06:09.390182    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> /etc/ssl/certs/77282.pem
	I0407 13:06:09.401303    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 13:06:09.418575    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem --> /etc/ssl/certs/77282.pem (1708 bytes)
	I0407 13:06:09.464779    7088 start.go:296] duration metric: took 4.9199041s for postStartSetup
	I0407 13:06:09.468301    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:06:11.627432    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:06:11.627752    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:11.627752    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:06:14.181255    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:06:14.181255    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:14.181893    7088 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\config.json ...
	I0407 13:06:14.184106    7088 start.go:128] duration metric: took 2m6.9068881s to createHost
	I0407 13:06:14.184106    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:06:16.417020    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:06:16.417020    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:16.417613    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:06:18.982886    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:06:18.982886    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:18.988627    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:06:18.989402    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.94.27 22 <nil> <nil>}
	I0407 13:06:18.989402    7088 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 13:06:19.122806    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744031179.138002811
	
	I0407 13:06:19.122806    7088 fix.go:216] guest clock: 1744031179.138002811
	I0407 13:06:19.122806    7088 fix.go:229] Guest: 2025-04-07 13:06:19.138002811 +0000 UTC Remote: 2025-04-07 13:06:14.1841065 +0000 UTC m=+557.623865201 (delta=4.953896311s)
	I0407 13:06:19.122806    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:06:21.273857    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:06:21.273857    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:21.273857    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:06:23.840987    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:06:23.840987    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:23.846844    7088 main.go:141] libmachine: Using SSH client type: native
	I0407 13:06:23.847531    7088 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.94.27 22 <nil> <nil>}
	I0407 13:06:23.847601    7088 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744031179
	I0407 13:06:23.994379    7088 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  7 13:06:19 UTC 2025
	
	I0407 13:06:23.994379    7088 fix.go:236] clock set: Mon Apr  7 13:06:19 UTC 2025
	 (err=<nil>)
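The clock fix above compares the guest's "date +%s.%N" output against the local clock and resyncs the guest with "sudo date -s @<seconds>" when the skew is noticeable. A minimal Go sketch of that check follows; the 2-second threshold and the use of the host clock as the reference timestamp are assumptions, since the log does not spell out which reference minikube uses.

// A sketch of the guest-clock skew check reported by fix.go above.
package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestRaw := "1744031179.138002811" // what "date +%s.%N" printed on the guest
	parts := strings.SplitN(guestRaw, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	host := time.Now()
	delta := guest.Sub(host)
	fmt.Printf("guest=%s host=%s delta=%s\n", guest.UTC(), host.UTC(), delta)

	if math.Abs(delta.Seconds()) > 2 {
		// Command that would be sent to the guest over SSH (assumed reference).
		fmt.Printf("sudo date -s @%d\n", host.Unix())
	}
}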
	I0407 13:06:23.994379    7088 start.go:83] releasing machines lock for "ha-573100-m03", held for 2m16.7171165s
	I0407 13:06:23.994379    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:06:26.166458    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:06:26.167520    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:26.167552    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:06:28.758265    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:06:28.758265    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:28.761954    7088 out.go:177] * Found network options:
	I0407 13:06:28.765600    7088 out.go:177]   - NO_PROXY=172.17.95.223,172.17.82.162
	W0407 13:06:28.768383    7088 proxy.go:119] fail to check proxy env: Error ip not in block
	W0407 13:06:28.768383    7088 proxy.go:119] fail to check proxy env: Error ip not in block
	I0407 13:06:28.770330    7088 out.go:177]   - NO_PROXY=172.17.95.223,172.17.82.162
	W0407 13:06:28.774296    7088 proxy.go:119] fail to check proxy env: Error ip not in block
	W0407 13:06:28.774296    7088 proxy.go:119] fail to check proxy env: Error ip not in block
	W0407 13:06:28.775907    7088 proxy.go:119] fail to check proxy env: Error ip not in block
	W0407 13:06:28.776090    7088 proxy.go:119] fail to check proxy env: Error ip not in block
	I0407 13:06:28.778094    7088 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0407 13:06:28.778619    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:06:28.791182    7088 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0407 13:06:28.791182    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100-m03 ).state
	I0407 13:06:31.060425    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:06:31.061355    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:31.061355    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:06:31.081548    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:06:31.081548    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:31.082206    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100-m03 ).networkadapters[0]).ipaddresses[0]
	I0407 13:06:33.884512    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:06:33.884774    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:33.884928    7088 sshutil.go:53] new ssh client: &{IP:172.17.94.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m03\id_rsa Username:docker}
	I0407 13:06:33.910632    7088 main.go:141] libmachine: [stdout =====>] : 172.17.94.27
	
	I0407 13:06:33.910968    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:33.911178    7088 sshutil.go:53] new ssh client: &{IP:172.17.94.27 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100-m03\id_rsa Username:docker}
	I0407 13:06:33.975176    7088 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.1969433s)
	W0407 13:06:33.975286    7088 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0407 13:06:34.010779    7088 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.2194683s)
	W0407 13:06:34.010779    7088 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 13:06:34.022814    7088 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 13:06:34.062056    7088 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 13:06:34.062135    7088 start.go:495] detecting cgroup driver to use...
	I0407 13:06:34.062371    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0407 13:06:34.072679    7088 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0407 13:06:34.072679    7088 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0407 13:06:34.114289    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0407 13:06:34.146301    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0407 13:06:34.166314    7088 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0407 13:06:34.176820    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0407 13:06:34.210413    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 13:06:34.241373    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0407 13:06:34.271361    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 13:06:34.307544    7088 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 13:06:34.337585    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0407 13:06:34.373277    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0407 13:06:34.407770    7088 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0407 13:06:34.440791    7088 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 13:06:34.458779    7088 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 13:06:34.469773    7088 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 13:06:34.514226    7088 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 13:06:34.543526    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:06:34.744631    7088 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0407 13:06:34.776921    7088 start.go:495] detecting cgroup driver to use...
	I0407 13:06:34.788915    7088 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0407 13:06:34.822330    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 13:06:34.856625    7088 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 13:06:34.899766    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 13:06:34.935659    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 13:06:34.971095    7088 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0407 13:06:35.040477    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 13:06:35.066651    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:06:35.111501    7088 ssh_runner.go:195] Run: which cri-dockerd
	I0407 13:06:35.128878    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0407 13:06:35.145443    7088 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0407 13:06:35.192232    7088 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0407 13:06:35.399590    7088 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0407 13:06:35.595188    7088 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0407 13:06:35.595295    7088 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0407 13:06:35.639760    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:06:35.828328    7088 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0407 13:06:38.443523    7088 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.6150678s)
	I0407 13:06:38.455396    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0407 13:06:38.489967    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 13:06:38.536741    7088 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0407 13:06:38.724195    7088 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0407 13:06:38.906819    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:06:39.091868    7088 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0407 13:06:39.133123    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 13:06:39.172505    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:06:39.369060    7088 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0407 13:06:39.477345    7088 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0407 13:06:39.490031    7088 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0407 13:06:39.498975    7088 start.go:563] Will wait 60s for crictl version
	I0407 13:06:39.511798    7088 ssh_runner.go:195] Run: which crictl
	I0407 13:06:39.529962    7088 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 13:06:39.586030    7088 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0407 13:06:39.596402    7088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 13:06:39.637691    7088 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 13:06:39.675255    7088 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0407 13:06:39.678475    7088 out.go:177]   - env NO_PROXY=172.17.95.223
	I0407 13:06:39.680944    7088 out.go:177]   - env NO_PROXY=172.17.95.223,172.17.82.162
	I0407 13:06:39.684119    7088 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0407 13:06:39.689610    7088 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0407 13:06:39.689610    7088 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0407 13:06:39.689610    7088 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0407 13:06:39.689610    7088 ip.go:211] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:71:e6:e3 Flags:up|broadcast|multicast|running}
	I0407 13:06:39.693526    7088 ip.go:214] interface addr: fe80::8b22:fcbb:f73:c9e5/64
	I0407 13:06:39.693526    7088 ip.go:214] interface addr: 172.17.80.1/20
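The ip.go lines above walk the host's network interfaces, skip the ones whose names do not start with "vEthernet (Default Switch)", and read the addresses of the matching interface. A minimal Go sketch of the same lookup using only the standard library:

// A sketch of the interface lookup logged by ip.go above.
package main

import (
	"fmt"
	"net"
	"strings"
)

func main() {
	const prefix = "vEthernet (Default Switch)"
	ifaces, err := net.Interfaces()
	if err != nil {
		panic(err)
	}
	for _, iface := range ifaces {
		if !strings.HasPrefix(iface.Name, prefix) {
			fmt.Printf("%q does not match prefix %q\n", iface.Name, prefix)
			continue
		}
		addrs, err := iface.Addrs()
		if err != nil {
			panic(err)
		}
		for _, a := range addrs {
			fmt.Println("interface addr:", a) // e.g. 172.17.80.1/20
		}
	}
}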
	I0407 13:06:39.708269    7088 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0407 13:06:39.713938    7088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:06:39.738996    7088 mustload.go:65] Loading cluster: ha-573100
	I0407 13:06:39.739841    7088 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:06:39.740600    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 13:06:41.904483    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:06:41.904830    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:41.904830    7088 host.go:66] Checking if "ha-573100" exists ...
	I0407 13:06:41.905516    7088 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100 for IP: 172.17.94.27
	I0407 13:06:41.905516    7088 certs.go:194] generating shared ca certs ...
	I0407 13:06:41.905574    7088 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:06:41.906311    7088 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0407 13:06:41.906620    7088 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0407 13:06:41.906964    7088 certs.go:256] generating profile certs ...
	I0407 13:06:41.907511    7088 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\client.key
	I0407 13:06:41.907687    7088 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key.034140ef
	I0407 13:06:41.907732    7088 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt.034140ef with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.95.223 172.17.82.162 172.17.94.27 172.17.95.254]
	I0407 13:06:42.163160    7088 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt.034140ef ...
	I0407 13:06:42.163160    7088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt.034140ef: {Name:mkcb32ba08db63a84c65679bc81879233c0f3f5e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:06:42.164281    7088 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key.034140ef ...
	I0407 13:06:42.164281    7088 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key.034140ef: {Name:mk5617a33f3125826c920bd0ef10e498536f2e4a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:06:42.165282    7088 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt.034140ef -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt
	I0407 13:06:42.182405    7088 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key.034140ef -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key
	I0407 13:06:42.184004    7088 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.key
	I0407 13:06:42.184004    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0407 13:06:42.184078    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0407 13:06:42.184078    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0407 13:06:42.184078    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0407 13:06:42.184078    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0407 13:06:42.184657    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0407 13:06:42.185234    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0407 13:06:42.185525    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0407 13:06:42.186038    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem (1338 bytes)
	W0407 13:06:42.186346    7088 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728_empty.pem, impossibly tiny 0 bytes
	I0407 13:06:42.186423    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0407 13:06:42.186654    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0407 13:06:42.186863    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0407 13:06:42.186863    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0407 13:06:42.187556    7088 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem (1708 bytes)
	I0407 13:06:42.187556    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:06:42.187556    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem -> /usr/share/ca-certificates/7728.pem
	I0407 13:06:42.187556    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> /usr/share/ca-certificates/77282.pem
	I0407 13:06:42.188236    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 13:06:44.416794    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:06:44.417138    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:44.417138    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 13:06:46.969951    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 13:06:46.970682    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:46.970862    7088 sshutil.go:53] new ssh client: &{IP:172.17.95.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\id_rsa Username:docker}
	I0407 13:06:47.070247    7088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0407 13:06:47.077903    7088 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0407 13:06:47.114307    7088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0407 13:06:47.121213    7088 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0407 13:06:47.151795    7088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0407 13:06:47.159244    7088 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0407 13:06:47.197401    7088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0407 13:06:47.208344    7088 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0407 13:06:47.236743    7088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0407 13:06:47.244538    7088 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0407 13:06:47.272970    7088 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0407 13:06:47.283740    7088 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1675 bytes)
	I0407 13:06:47.303796    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 13:06:47.349171    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 13:06:47.391586    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 13:06:47.434809    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0407 13:06:47.480017    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0407 13:06:47.524566    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 13:06:47.569494    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 13:06:47.614761    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\ha-573100\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 13:06:47.663490    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 13:06:47.712791    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem --> /usr/share/ca-certificates/7728.pem (1338 bytes)
	I0407 13:06:47.755113    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem --> /usr/share/ca-certificates/77282.pem (1708 bytes)
	I0407 13:06:47.798492    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0407 13:06:47.830431    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0407 13:06:47.861393    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0407 13:06:47.892099    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0407 13:06:47.922922    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0407 13:06:47.954629    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1675 bytes)
	I0407 13:06:47.987178    7088 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0407 13:06:48.030307    7088 ssh_runner.go:195] Run: openssl version
	I0407 13:06:48.056770    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77282.pem && ln -fs /usr/share/ca-certificates/77282.pem /etc/ssl/certs/77282.pem"
	I0407 13:06:48.093086    7088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77282.pem
	I0407 13:06:48.102031    7088 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:38 /usr/share/ca-certificates/77282.pem
	I0407 13:06:48.112870    7088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77282.pem
	I0407 13:06:48.133417    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77282.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 13:06:48.163296    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 13:06:48.195562    7088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:06:48.202950    7088 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:06:48.213785    7088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:06:48.234478    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 13:06:48.264378    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7728.pem && ln -fs /usr/share/ca-certificates/7728.pem /etc/ssl/certs/7728.pem"
	I0407 13:06:48.292649    7088 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7728.pem
	I0407 13:06:48.300010    7088 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:38 /usr/share/ca-certificates/7728.pem
	I0407 13:06:48.310632    7088 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7728.pem
	I0407 13:06:48.330556    7088 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7728.pem /etc/ssl/certs/51391683.0"
	I0407 13:06:48.360043    7088 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 13:06:48.366461    7088 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0407 13:06:48.366713    7088 kubeadm.go:934] updating node {m03 172.17.94.27 8443 v1.32.2 docker true true} ...
	I0407 13:06:48.366918    7088 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-573100-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.94.27
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:ha-573100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 13:06:48.366970    7088 kube-vip.go:115] generating kube-vip config ...
	I0407 13:06:48.377358    7088 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0407 13:06:48.402103    7088 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0407 13:06:48.402307    7088 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 172.17.95.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.10
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0407 13:06:48.415366    7088 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 13:06:48.432165    7088 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0407 13:06:48.442773    7088 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0407 13:06:48.463213    7088 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0407 13:06:48.463287    7088 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256
	I0407 13:06:48.463407    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm -> /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0407 13:06:48.463407    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
	I0407 13:06:48.463521    7088 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256
	I0407 13:06:48.477164    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:06:48.477591    7088 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0407 13:06:48.483196    7088 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0407 13:06:48.500225    7088 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0407 13:06:48.500225    7088 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0407 13:06:48.500225    7088 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
	I0407 13:06:48.500225    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
	I0407 13:06:48.500225    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0407 13:06:48.510901    7088 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0407 13:06:48.562416    7088 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0407 13:06:48.562760    7088 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
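The binaries.go/ssh_runner lines above show a simple cache check: minikube stats kubectl, kubeadm and kubelet under /var/lib/minikube/binaries/v1.32.2 inside the VM and only scp's each one from the host cache when the stat fails. A minimal local-filesystem analogue of that check-then-copy step is sketched below; paths are placeholders and the real code performs the stat and transfer over SSH, so this is illustration only:

    package main

    import (
    	"fmt"
    	"io"
    	"log"
    	"os"
    	"path/filepath"
    )

    // ensureBinary copies src to dst only when dst is missing, mirroring the
    // "existence check ... Process exited with status 1 -> scp" pattern above.
    func ensureBinary(src, dst string) error {
    	if fi, err := os.Stat(dst); err == nil {
    		fmt.Printf("%s already present (%d bytes)\n", dst, fi.Size())
    		return nil
    	}
    	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
    		return err
    	}
    	in, err := os.Open(src)
    	if err != nil {
    		return err
    	}
    	defer in.Close()
    	out, err := os.OpenFile(dst, os.O_CREATE|os.O_WRONLY|os.O_TRUNC, 0o755)
    	if err != nil {
    		return err
    	}
    	defer out.Close()
    	_, err = io.Copy(out, in)
    	return err
    }

    func main() {
    	// Placeholder paths; the real transfer goes from the host cache dir to
    	// /var/lib/minikube/binaries/<version>/ inside the VM.
    	if err := ensureBinary("cache/kubelet", "binaries/kubelet"); err != nil {
    		log.Fatal(err)
    	}
    }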
	I0407 13:06:49.794804    7088 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0407 13:06:49.814094    7088 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (312 bytes)
	I0407 13:06:49.857522    7088 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 13:06:49.897176    7088 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1441 bytes)
	I0407 13:06:49.939080    7088 ssh_runner.go:195] Run: grep 172.17.95.254	control-plane.minikube.internal$ /etc/hosts
	I0407 13:06:49.946559    7088 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.95.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:06:49.977295    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:06:50.194251    7088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:06:50.221374    7088 host.go:66] Checking if "ha-573100" exists ...
	I0407 13:06:50.222248    7088 start.go:317] joinCluster: &{Name:ha-573100 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ha-573100 Namespace:default APIServerHAVIP:172.17.95.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.95.223 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.82.162 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m03 IP:172.17.94.27 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:06:50.222536    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0407 13:06:50.222626    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM ha-573100 ).state
	I0407 13:06:52.404003    7088 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:06:52.404003    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:52.404086    7088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM ha-573100 ).networkadapters[0]).ipaddresses[0]
	I0407 13:06:54.996292    7088 main.go:141] libmachine: [stdout =====>] : 172.17.95.223
	
	I0407 13:06:54.996292    7088 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:06:54.997153    7088 sshutil.go:53] new ssh client: &{IP:172.17.95.223 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\ha-573100\id_rsa Username:docker}
	I0407 13:06:55.206617    7088 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0": (4.9839865s)
	I0407 13:06:55.206735    7088 start.go:343] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:172.17.94.27 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 13:06:55.206879    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i82xup.3c4guti3nmbehjm7 --discovery-token-ca-cert-hash sha256:e47514a965ab97a4e144f4fd6f094d10c7d6accee7eb763a802bd4fd6fa9e479 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-573100-m03 --control-plane --apiserver-advertise-address=172.17.94.27 --apiserver-bind-port=8443"
	I0407 13:07:37.435816    7088 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token i82xup.3c4guti3nmbehjm7 --discovery-token-ca-cert-hash sha256:e47514a965ab97a4e144f4fd6f094d10c7d6accee7eb763a802bd4fd6fa9e479 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=ha-573100-m03 --control-plane --apiserver-advertise-address=172.17.94.27 --apiserver-bind-port=8443": (42.2286799s)
	I0407 13:07:37.435877    7088 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0407 13:07:38.163288    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-573100-m03 minikube.k8s.io/updated_at=2025_04_07T13_07_38_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277 minikube.k8s.io/name=ha-573100 minikube.k8s.io/primary=false
	I0407 13:07:38.354137    7088 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-573100-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0407 13:07:38.552775    7088 start.go:319] duration metric: took 48.3303045s to joinCluster
	I0407 13:07:38.553084    7088 start.go:235] Will wait 6m0s for node &{Name:m03 IP:172.17.94.27 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 13:07:38.554309    7088 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:07:38.555957    7088 out.go:177] * Verifying Kubernetes components...
	I0407 13:07:38.572183    7088 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:07:38.957903    7088 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:07:39.001876    7088 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 13:07:39.002052    7088 kapi.go:59] client config for ha-573100: &rest.Config{Host:"https://172.17.95.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-573100\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\ha-573100\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2be92e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0407 13:07:39.002588    7088 kubeadm.go:483] Overriding stale ClientConfig host https://172.17.95.254:8443 with https://172.17.95.223:8443
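The kapi.go/kubeadm.go lines above show the harness building a client-go rest.Config from the test kubeconfig and then overriding the stale VIP host (https://172.17.95.254:8443) with the primary node's direct endpoint (https://172.17.95.223:8443) before it starts polling the API. A rough, hypothetical sketch of that pattern with client-go follows; the kubeconfig path, host and node name are placeholders, not values the test guarantees:

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	// Load a kubeconfig the same way the harness does (path is illustrative).
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}

    	// Override a stale VIP endpoint with a node's direct API server address,
    	// mirroring the "Overriding stale ClientConfig host" step in the log.
    	cfg.Host = "https://172.17.95.223:8443"

    	clientset, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	node, err := clientset.CoreV1().Nodes().Get(context.Background(), "ha-573100-m03", metav1.GetOptions{})
    	if err != nil {
    		log.Fatal(err)
    	}
    	fmt.Println(node.Name, node.Status.Conditions)
    }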
	I0407 13:07:39.003766    7088 node_ready.go:35] waiting up to 6m0s for node "ha-573100-m03" to be "Ready" ...
	I0407 13:07:39.003766    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:39.003766    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:39.003766    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:39.003766    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:39.018546    7088 round_trippers.go:581] Response Status: 200 OK in 14 milliseconds
	I0407 13:07:39.504540    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:39.504540    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:39.504540    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:39.504540    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:39.510541    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:07:40.004559    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:40.004559    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:40.004559    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:40.004559    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:40.010572    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:07:40.504774    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:40.504774    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:40.504774    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:40.504774    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:40.511951    7088 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 13:07:41.006640    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:41.006697    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:41.006697    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:41.006697    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:41.012117    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:41.012490    7088 node_ready.go:53] node "ha-573100-m03" has status "Ready":"False"
	I0407 13:07:41.505069    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:41.505069    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:41.505069    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:41.505069    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:41.511437    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:07:42.004117    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:42.004485    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:42.004485    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:42.004485    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:42.008774    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:07:42.504831    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:42.504952    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:42.504952    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:42.504952    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:42.846676    7088 round_trippers.go:581] Response Status: 200 OK in 341 milliseconds
	I0407 13:07:43.004405    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:43.004405    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:43.004405    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:43.004405    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:43.138046    7088 round_trippers.go:581] Response Status: 200 OK in 133 milliseconds
	I0407 13:07:43.138620    7088 node_ready.go:53] node "ha-573100-m03" has status "Ready":"False"
	I0407 13:07:43.504325    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:43.504325    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:43.504325    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:43.504325    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:43.509322    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:07:44.004474    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:44.004474    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:44.004474    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:44.004474    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:44.010492    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:44.504721    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:44.504721    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:44.504793    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:44.504793    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:44.509203    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:07:45.004678    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:45.004678    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:45.004678    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:45.004678    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:45.082222    7088 round_trippers.go:581] Response Status: 200 OK in 77 milliseconds
	I0407 13:07:45.504719    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:45.504719    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:45.504719    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:45.504719    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:45.531624    7088 round_trippers.go:581] Response Status: 200 OK in 26 milliseconds
	I0407 13:07:45.532650    7088 node_ready.go:53] node "ha-573100-m03" has status "Ready":"False"
	I0407 13:07:46.004139    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:46.004139    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:46.004139    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:46.004139    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:46.008764    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:07:46.504714    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:46.504714    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:46.504714    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:46.504714    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:46.510170    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:47.004601    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:47.004601    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:47.004668    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:47.004668    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:47.024926    7088 round_trippers.go:581] Response Status: 200 OK in 20 milliseconds
	I0407 13:07:47.504741    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:47.504741    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:47.504741    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:47.504741    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:47.513942    7088 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0407 13:07:48.005138    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:48.005138    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:48.005138    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:48.005138    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:48.010001    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:07:48.010374    7088 node_ready.go:53] node "ha-573100-m03" has status "Ready":"False"
	I0407 13:07:48.504362    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:48.504362    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:48.504362    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:48.504362    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:48.510542    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:07:49.003962    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:49.003962    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:49.003962    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:49.003962    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:49.009521    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:49.504567    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:49.504567    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:49.504567    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:49.504567    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:49.510810    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:07:50.004106    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:50.004106    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:50.004106    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:50.004106    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:50.009146    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:50.504256    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:50.504673    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:50.504673    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:50.504673    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:50.510312    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:50.510647    7088 node_ready.go:53] node "ha-573100-m03" has status "Ready":"False"
	I0407 13:07:51.004934    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:51.004934    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:51.004934    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:51.004934    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:51.010349    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:51.504459    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:51.504503    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:51.504503    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:51.504503    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:51.509822    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:07:52.004031    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:52.004031    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:52.004508    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:52.004508    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:52.008186    7088 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 13:07:52.506149    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:52.506260    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:52.506260    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:52.506366    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:52.512083    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:52.512083    7088 node_ready.go:53] node "ha-573100-m03" has status "Ready":"False"
	I0407 13:07:53.004261    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:53.004261    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:53.004261    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:53.004261    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:53.009813    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:53.505000    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:53.505000    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:53.505000    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:53.505000    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:53.510588    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:54.004001    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:54.004001    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:54.004001    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:54.004001    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:54.009257    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:54.504566    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:54.504989    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:54.504989    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:54.505066    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:54.510324    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:55.005257    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:55.005257    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:55.005257    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:55.005257    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:55.011064    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:55.011064    7088 node_ready.go:53] node "ha-573100-m03" has status "Ready":"False"
	I0407 13:07:55.505042    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:55.505500    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:55.505500    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:55.505500    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:55.511088    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:56.004432    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:56.004432    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:56.004432    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:56.004432    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:56.017908    7088 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0407 13:07:56.505190    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:56.505277    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:56.505277    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:56.505277    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:56.512696    7088 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 13:07:57.004187    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:57.004187    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:57.004187    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:57.004187    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:57.009848    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:57.504636    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:57.504717    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:57.504717    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:57.504717    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:57.509663    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:07:57.510664    7088 node_ready.go:53] node "ha-573100-m03" has status "Ready":"False"
	I0407 13:07:58.004943    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:58.004943    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:58.004943    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:58.004943    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:58.008962    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:07:58.504445    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:58.504445    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:58.504445    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:58.504445    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:58.509756    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:59.005536    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:59.005536    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:59.005536    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:59.005536    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:59.011371    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:07:59.504688    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:07:59.504688    7088 round_trippers.go:476] Request Headers:
	I0407 13:07:59.504688    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:07:59.504688    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:07:59.510911    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:07:59.511085    7088 node_ready.go:53] node "ha-573100-m03" has status "Ready":"False"
	I0407 13:08:00.005137    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:08:00.005137    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.005137    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.005137    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.010134    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:08:00.011340    7088 node_ready.go:49] node "ha-573100-m03" has status "Ready":"True"
	I0407 13:08:00.011397    7088 node_ready.go:38] duration metric: took 21.0075341s for node "ha-573100-m03" to be "Ready" ...
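The repeated GET /api/v1/nodes/ha-573100-m03 requests above are the node_ready wait: the harness re-fetches the Node object roughly every 500ms until its NodeReady condition reports True, which here took about 21s. A simplified sketch of such a wait loop with client-go is shown below, reusing the same clientcmd/NewForConfig setup as the earlier sketch; it is an illustrative approximation of the polling pattern, not minikube's actual implementation, and the path and node name are placeholders:

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    // isNodeReady reports whether the NodeReady condition on a Node is True.
    func isNodeReady(node *corev1.Node) bool {
    	for _, cond := range node.Status.Conditions {
    		if cond.Type == corev1.NodeReady {
    			return cond.Status == corev1.ConditionTrue
    		}
    	}
    	return false
    }

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		log.Fatal(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}

    	ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
    	defer cancel()

    	// Poll roughly every 500ms, like the GET requests in the log above.
    	for {
    		node, err := cs.CoreV1().Nodes().Get(ctx, "ha-573100-m03", metav1.GetOptions{})
    		if err == nil && isNodeReady(node) {
    			fmt.Println("node is Ready")
    			return
    		}
    		select {
    		case <-ctx.Done():
    			log.Fatal("timed out waiting for node to become Ready")
    		case <-time.After(500 * time.Millisecond):
    		}
    	}
    }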
	I0407 13:08:00.011480    7088 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:08:00.011655    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods
	I0407 13:08:00.011655    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.011655    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.011655    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.017282    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:00.020639    7088 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-whpg2" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:00.020729    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-whpg2
	I0407 13:08:00.020729    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.020729    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.020880    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.028218    7088 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 13:08:00.029226    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:08:00.029226    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.029226    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.029226    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.034234    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:00.035256    7088 pod_ready.go:93] pod "coredns-668d6bf9bc-whpg2" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:00.035256    7088 pod_ready.go:82] duration metric: took 14.6173ms for pod "coredns-668d6bf9bc-whpg2" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:00.035357    7088 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-z4nkw" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:00.035357    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-z4nkw
	I0407 13:08:00.035357    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.035357    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.035357    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.043391    7088 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0407 13:08:00.043948    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:08:00.044011    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.044011    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.044011    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.047769    7088 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 13:08:00.047769    7088 pod_ready.go:93] pod "coredns-668d6bf9bc-z4nkw" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:00.047769    7088 pod_ready.go:82] duration metric: took 12.4122ms for pod "coredns-668d6bf9bc-z4nkw" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:00.047769    7088 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:00.048389    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/etcd-ha-573100
	I0407 13:08:00.048389    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.048389    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.048496    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.069798    7088 round_trippers.go:581] Response Status: 200 OK in 21 milliseconds
	I0407 13:08:00.070306    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:08:00.070306    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.070306    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.070306    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.073901    7088 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 13:08:00.075287    7088 pod_ready.go:93] pod "etcd-ha-573100" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:00.075350    7088 pod_ready.go:82] duration metric: took 27.0537ms for pod "etcd-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:00.075350    7088 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:00.075476    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/etcd-ha-573100-m02
	I0407 13:08:00.075507    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.075507    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.075507    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.078895    7088 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 13:08:00.078895    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:08:00.078895    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.078895    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.078895    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.085307    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:00.085528    7088 pod_ready.go:93] pod "etcd-ha-573100-m02" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:00.085623    7088 pod_ready.go:82] duration metric: took 10.2735ms for pod "etcd-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:00.085623    7088 pod_ready.go:79] waiting up to 6m0s for pod "etcd-ha-573100-m03" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:00.206212    7088 request.go:661] Waited for 120.4521ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/etcd-ha-573100-m03
	I0407 13:08:00.206504    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/etcd-ha-573100-m03
	I0407 13:08:00.206711    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.206711    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.206792    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.211626    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:08:00.405419    7088 request.go:661] Waited for 193.2891ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:08:00.405834    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:08:00.405868    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.405868    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.405868    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.413631    7088 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 13:08:00.415874    7088 pod_ready.go:93] pod "etcd-ha-573100-m03" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:00.415874    7088 pod_ready.go:82] duration metric: took 330.2489ms for pod "etcd-ha-573100-m03" in "kube-system" namespace to be "Ready" ...
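The "Waited for ... due to client-side throttling, not priority and fairness" messages that start appearing here come from client-go's default client-side rate limiter. With QPS and Burst left at zero in the rest.Config dump earlier in this log, client-go falls back to its defaults (5 requests/second, burst 10), so bursts of back-to-back pod and node GETs get spaced out. Purely as a hypothetical tweak (not something this test does), the limits could be raised on the rest.Config before building the clientset:

    package main

    import (
    	"log"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // placeholder path
    	if err != nil {
    		log.Fatal(err)
    	}

    	// With QPS/Burst left at zero, client-go applies its defaults (5 QPS,
    	// burst 10), which is what produces the client-side throttling waits
    	// seen in the log. Raising them relaxes that limiter.
    	cfg.QPS = 50
    	cfg.Burst = 100

    	if _, err := kubernetes.NewForConfig(cfg); err != nil {
    		log.Fatal(err)
    	}
    }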
	I0407 13:08:00.415874    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:00.606483    7088 request.go:661] Waited for 190.6078ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-573100
	I0407 13:08:00.606483    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-573100
	I0407 13:08:00.606483    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.606483    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.606483    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.611856    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:00.805396    7088 request.go:661] Waited for 192.7066ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:08:00.805396    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:08:00.805396    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:00.805396    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:00.805396    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:00.810355    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:08:00.810534    7088 pod_ready.go:93] pod "kube-apiserver-ha-573100" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:00.810534    7088 pod_ready.go:82] duration metric: took 394.6588ms for pod "kube-apiserver-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:00.810534    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:01.006505    7088 request.go:661] Waited for 195.9699ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-573100-m02
	I0407 13:08:01.006819    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-573100-m02
	I0407 13:08:01.006819    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:01.006819    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:01.006819    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:01.012204    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:01.205505    7088 request.go:661] Waited for 192.4647ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:08:01.205869    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:08:01.205869    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:01.205869    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:01.205869    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:01.210940    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:01.211378    7088 pod_ready.go:93] pod "kube-apiserver-ha-573100-m02" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:01.211438    7088 pod_ready.go:82] duration metric: took 400.9019ms for pod "kube-apiserver-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:01.211438    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-ha-573100-m03" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:01.405236    7088 request.go:661] Waited for 193.5187ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-573100-m03
	I0407 13:08:01.405236    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-573100-m03
	I0407 13:08:01.405236    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:01.405236    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:01.405236    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:01.411261    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:08:01.605662    7088 request.go:661] Waited for 193.3661ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:08:01.605662    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:08:01.605662    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:01.605662    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:01.605662    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:01.611067    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:01.611134    7088 pod_ready.go:93] pod "kube-apiserver-ha-573100-m03" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:01.611134    7088 pod_ready.go:82] duration metric: took 399.6941ms for pod "kube-apiserver-ha-573100-m03" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:01.611134    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:01.805712    7088 request.go:661] Waited for 194.0364ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-573100
	I0407 13:08:01.806095    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-573100
	I0407 13:08:01.806261    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:01.806261    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:01.806261    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:01.815415    7088 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0407 13:08:02.005978    7088 request.go:661] Waited for 189.5201ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:08:02.005978    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:08:02.005978    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:02.005978    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:02.005978    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:02.011991    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:02.012305    7088 pod_ready.go:93] pod "kube-controller-manager-ha-573100" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:02.012305    7088 pod_ready.go:82] duration metric: took 401.1692ms for pod "kube-controller-manager-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:02.012399    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:02.205408    7088 request.go:661] Waited for 192.937ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-573100-m02
	I0407 13:08:02.205834    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-573100-m02
	I0407 13:08:02.205834    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:02.205895    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:02.205895    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:02.210213    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:08:02.406308    7088 request.go:661] Waited for 196.0936ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:08:02.406308    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:08:02.406308    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:02.406850    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:02.406850    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:02.414526    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:08:02.414756    7088 pod_ready.go:93] pod "kube-controller-manager-ha-573100-m02" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:02.414756    7088 pod_ready.go:82] duration metric: took 402.3554ms for pod "kube-controller-manager-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:02.414756    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-ha-573100-m03" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:02.606427    7088 request.go:661] Waited for 191.6705ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-573100-m03
	I0407 13:08:02.606427    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-573100-m03
	I0407 13:08:02.606427    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:02.606427    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:02.606427    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:02.612043    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:02.805953    7088 request.go:661] Waited for 193.437ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:08:02.806327    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:08:02.806408    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:02.806408    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:02.806408    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:02.811259    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:08:02.811656    7088 pod_ready.go:93] pod "kube-controller-manager-ha-573100-m03" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:02.811721    7088 pod_ready.go:82] duration metric: took 396.9632ms for pod "kube-controller-manager-ha-573100-m03" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:02.811721    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-fgqm9" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:03.005905    7088 request.go:661] Waited for 194.0834ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fgqm9
	I0407 13:08:03.005905    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-fgqm9
	I0407 13:08:03.005905    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:03.005905    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:03.005905    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:03.011391    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:03.206299    7088 request.go:661] Waited for 194.387ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:08:03.206299    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:08:03.206299    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:03.206299    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:03.206299    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:03.211731    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:03.212275    7088 pod_ready.go:93] pod "kube-proxy-fgqm9" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:03.212275    7088 pod_ready.go:82] duration metric: took 400.5519ms for pod "kube-proxy-fgqm9" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:03.212275    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-sxkgm" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:03.406727    7088 request.go:661] Waited for 194.2495ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sxkgm
	I0407 13:08:03.407196    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-sxkgm
	I0407 13:08:03.407196    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:03.407196    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:03.407196    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:03.415711    7088 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0407 13:08:03.606913    7088 request.go:661] Waited for 191.2015ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:08:03.606913    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:08:03.606913    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:03.606913    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:03.606913    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:03.614891    7088 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 13:08:03.615121    7088 pod_ready.go:93] pod "kube-proxy-sxkgm" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:03.615121    7088 pod_ready.go:82] duration metric: took 402.8445ms for pod "kube-proxy-sxkgm" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:03.615121    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-xsgf7" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:03.805393    7088 request.go:661] Waited for 190.2716ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsgf7
	I0407 13:08:03.805393    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xsgf7
	I0407 13:08:03.805393    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:03.805393    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:03.805393    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:03.810650    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:04.005951    7088 request.go:661] Waited for 194.6301ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:08:04.005951    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:08:04.005951    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:04.005951    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:04.005951    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:04.013001    7088 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 13:08:04.013377    7088 pod_ready.go:93] pod "kube-proxy-xsgf7" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:04.013377    7088 pod_ready.go:82] duration metric: took 398.2546ms for pod "kube-proxy-xsgf7" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:04.013377    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:04.205615    7088 request.go:661] Waited for 191.7112ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-573100
	I0407 13:08:04.205615    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-573100
	I0407 13:08:04.205615    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:04.205615    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:04.205615    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:04.211655    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:08:04.406012    7088 request.go:661] Waited for 193.389ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:08:04.406012    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100
	I0407 13:08:04.406012    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:04.406012    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:04.406012    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:04.411154    7088 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 13:08:04.411516    7088 pod_ready.go:93] pod "kube-scheduler-ha-573100" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:04.411516    7088 pod_ready.go:82] duration metric: took 398.1366ms for pod "kube-scheduler-ha-573100" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:04.411658    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:04.605898    7088 request.go:661] Waited for 194.2394ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-573100-m02
	I0407 13:08:04.605898    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-573100-m02
	I0407 13:08:04.605898    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:04.605898    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:04.605898    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:04.611981    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:08:04.805667    7088 request.go:661] Waited for 193.4921ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:08:04.805667    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m02
	I0407 13:08:04.805667    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:04.805667    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:04.805667    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:04.814412    7088 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0407 13:08:04.815251    7088 pod_ready.go:93] pod "kube-scheduler-ha-573100-m02" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:04.815318    7088 pod_ready.go:82] duration metric: took 403.6582ms for pod "kube-scheduler-ha-573100-m02" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:04.815376    7088 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-ha-573100-m03" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:05.005274    7088 request.go:661] Waited for 189.8398ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-573100-m03
	I0407 13:08:05.005274    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-573100-m03
	I0407 13:08:05.005274    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:05.005274    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:05.005274    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:05.011398    7088 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 13:08:05.205349    7088 request.go:661] Waited for 193.2863ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:08:05.205794    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes/ha-573100-m03
	I0407 13:08:05.205853    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:05.205853    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:05.205853    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:05.210655    7088 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 13:08:05.210765    7088 pod_ready.go:93] pod "kube-scheduler-ha-573100-m03" in "kube-system" namespace has status "Ready":"True"
	I0407 13:08:05.210765    7088 pod_ready.go:82] duration metric: took 395.3878ms for pod "kube-scheduler-ha-573100-m03" in "kube-system" namespace to be "Ready" ...
	I0407 13:08:05.210765    7088 pod_ready.go:39] duration metric: took 5.1992208s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 13:08:05.210765    7088 api_server.go:52] waiting for apiserver process to appear ...
	I0407 13:08:05.219947    7088 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 13:08:05.250266    7088 api_server.go:72] duration metric: took 26.6969918s to wait for apiserver process to appear ...
	I0407 13:08:05.250266    7088 api_server.go:88] waiting for apiserver healthz status ...
	I0407 13:08:05.250431    7088 api_server.go:253] Checking apiserver healthz at https://172.17.95.223:8443/healthz ...
	I0407 13:08:05.257067    7088 api_server.go:279] https://172.17.95.223:8443/healthz returned 200:
	ok
	I0407 13:08:05.257924    7088 round_trippers.go:470] GET https://172.17.95.223:8443/version
	I0407 13:08:05.257924    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:05.258043    7088 round_trippers.go:480]     Accept: application/json, */*
	I0407 13:08:05.258043    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:05.260102    7088 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0407 13:08:05.260274    7088 api_server.go:141] control plane version: v1.32.2
	I0407 13:08:05.260326    7088 api_server.go:131] duration metric: took 10.0606ms to wait for apiserver health ...
	I0407 13:08:05.260326    7088 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 13:08:05.405360    7088 request.go:661] Waited for 144.8775ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods
	I0407 13:08:05.405360    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods
	I0407 13:08:05.405360    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:05.405360    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:05.405360    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:05.412374    7088 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 13:08:05.414612    7088 system_pods.go:59] 24 kube-system pods found
	I0407 13:08:05.414612    7088 system_pods.go:61] "coredns-668d6bf9bc-whpg2" [48faa3ce-0f1f-4c88-8298-15960d3c75a7] Running
	I0407 13:08:05.414681    7088 system_pods.go:61] "coredns-668d6bf9bc-z4nkw" [4aa968e7-d945-4f70-932d-b42417702382] Running
	I0407 13:08:05.414681    7088 system_pods.go:61] "etcd-ha-573100" [c473d0ab-e66d-4b41-ad43-edce5e371027] Running
	I0407 13:08:05.414681    7088 system_pods.go:61] "etcd-ha-573100-m02" [0f05d56b-d0f5-4505-9d54-127111d30d27] Running
	I0407 13:08:05.414681    7088 system_pods.go:61] "etcd-ha-573100-m03" [caa1f496-b332-4035-873f-dae22202edc5] Running
	I0407 13:08:05.414681    7088 system_pods.go:61] "kindnet-fbm5f" [eccfc010-2f51-4693-92da-ce5e71254f88] Running
	I0407 13:08:05.414681    7088 system_pods.go:61] "kindnet-fxxw5" [4fc9602a-d72f-4421-96a3-a7b0b35e2ce6] Running
	I0407 13:08:05.414681    7088 system_pods.go:61] "kindnet-vhm9b" [355feff9-5819-4d85-82f0-2281fdcc5d5a] Running
	I0407 13:08:05.414681    7088 system_pods.go:61] "kube-apiserver-ha-573100" [60830754-3b25-4753-9ec0-d9cef7b7b548] Running
	I0407 13:08:05.414681    7088 system_pods.go:61] "kube-apiserver-ha-573100-m02" [5fa8bf0c-a2ff-4b0d-8e9f-a42172533517] Running
	I0407 13:08:05.414681    7088 system_pods.go:61] "kube-apiserver-ha-573100-m03" [2bfa7e7c-87be-4015-b16d-fd6f41383fb1] Running
	I0407 13:08:05.414681    7088 system_pods.go:61] "kube-controller-manager-ha-573100" [0c4d6f0d-d4ae-40cd-bfa7-b7f39dff081e] Running
	I0407 13:08:05.414738    7088 system_pods.go:61] "kube-controller-manager-ha-573100-m02" [cb31520b-fa77-4ceb-a798-c45f10c87d10] Running
	I0407 13:08:05.414738    7088 system_pods.go:61] "kube-controller-manager-ha-573100-m03" [2e7deda2-f453-4c3f-b1b9-432cc370678a] Running
	I0407 13:08:05.414738    7088 system_pods.go:61] "kube-proxy-fgqm9" [0033554f-f4b8-4c6a-8010-ace3b937df06] Running
	I0407 13:08:05.414738    7088 system_pods.go:61] "kube-proxy-sxkgm" [6e0a6f3f-a949-4b95-aaaa-d74c1a7e0efe] Running
	I0407 13:08:05.414738    7088 system_pods.go:61] "kube-proxy-xsgf7" [1bccfdb6-28f7-4190-a5a1-9316cfdf215e] Running
	I0407 13:08:05.414738    7088 system_pods.go:61] "kube-scheduler-ha-573100" [d46211dc-ab95-474b-abfc-218808a4d1aa] Running
	I0407 13:08:05.414738    7088 system_pods.go:61] "kube-scheduler-ha-573100-m02" [1fd3b48a-ef70-4cce-b7d4-24b44331bfba] Running
	I0407 13:08:05.414738    7088 system_pods.go:61] "kube-scheduler-ha-573100-m03" [749f4ff2-a63f-4ae7-b6de-d1c2d83b20de] Running
	I0407 13:08:05.414738    7088 system_pods.go:61] "kube-vip-ha-573100" [b8e24d1a-1309-482f-9734-99bcf4812448] Running
	I0407 13:08:05.414738    7088 system_pods.go:61] "kube-vip-ha-573100-m02" [6e3ad003-a31a-49de-841f-2e21e31f094d] Running
	I0407 13:08:05.414738    7088 system_pods.go:61] "kube-vip-ha-573100-m03" [5c76fc47-39e4-487d-a74e-6583cf7fb3e9] Running
	I0407 13:08:05.414738    7088 system_pods.go:61] "storage-provisioner" [8d89f971-c575-4089-b12b-823fe7524dc2] Running
	I0407 13:08:05.414738    7088 system_pods.go:74] duration metric: took 154.4112ms to wait for pod list to return data ...
	I0407 13:08:05.414738    7088 default_sa.go:34] waiting for default service account to be created ...
	I0407 13:08:05.606541    7088 request.go:661] Waited for 191.8018ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/default/serviceaccounts
	I0407 13:08:05.606541    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/default/serviceaccounts
	I0407 13:08:05.606999    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:05.606999    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:05.606999    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:05.612556    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:05.612692    7088 default_sa.go:45] found service account: "default"
	I0407 13:08:05.612692    7088 default_sa.go:55] duration metric: took 197.9526ms for default service account to be created ...
	I0407 13:08:05.612811    7088 system_pods.go:116] waiting for k8s-apps to be running ...
	I0407 13:08:05.806330    7088 request.go:661] Waited for 193.4397ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods
	I0407 13:08:05.806562    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/namespaces/kube-system/pods
	I0407 13:08:05.806562    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:05.806562    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:05.806562    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:05.811854    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:05.814110    7088 system_pods.go:86] 24 kube-system pods found
	I0407 13:08:05.814383    7088 system_pods.go:89] "coredns-668d6bf9bc-whpg2" [48faa3ce-0f1f-4c88-8298-15960d3c75a7] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "coredns-668d6bf9bc-z4nkw" [4aa968e7-d945-4f70-932d-b42417702382] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "etcd-ha-573100" [c473d0ab-e66d-4b41-ad43-edce5e371027] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "etcd-ha-573100-m02" [0f05d56b-d0f5-4505-9d54-127111d30d27] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "etcd-ha-573100-m03" [caa1f496-b332-4035-873f-dae22202edc5] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kindnet-fbm5f" [eccfc010-2f51-4693-92da-ce5e71254f88] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kindnet-fxxw5" [4fc9602a-d72f-4421-96a3-a7b0b35e2ce6] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kindnet-vhm9b" [355feff9-5819-4d85-82f0-2281fdcc5d5a] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-apiserver-ha-573100" [60830754-3b25-4753-9ec0-d9cef7b7b548] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-apiserver-ha-573100-m02" [5fa8bf0c-a2ff-4b0d-8e9f-a42172533517] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-apiserver-ha-573100-m03" [2bfa7e7c-87be-4015-b16d-fd6f41383fb1] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-controller-manager-ha-573100" [0c4d6f0d-d4ae-40cd-bfa7-b7f39dff081e] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-controller-manager-ha-573100-m02" [cb31520b-fa77-4ceb-a798-c45f10c87d10] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-controller-manager-ha-573100-m03" [2e7deda2-f453-4c3f-b1b9-432cc370678a] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-proxy-fgqm9" [0033554f-f4b8-4c6a-8010-ace3b937df06] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-proxy-sxkgm" [6e0a6f3f-a949-4b95-aaaa-d74c1a7e0efe] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-proxy-xsgf7" [1bccfdb6-28f7-4190-a5a1-9316cfdf215e] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-scheduler-ha-573100" [d46211dc-ab95-474b-abfc-218808a4d1aa] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-scheduler-ha-573100-m02" [1fd3b48a-ef70-4cce-b7d4-24b44331bfba] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-scheduler-ha-573100-m03" [749f4ff2-a63f-4ae7-b6de-d1c2d83b20de] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-vip-ha-573100" [b8e24d1a-1309-482f-9734-99bcf4812448] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-vip-ha-573100-m02" [6e3ad003-a31a-49de-841f-2e21e31f094d] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "kube-vip-ha-573100-m03" [5c76fc47-39e4-487d-a74e-6583cf7fb3e9] Running
	I0407 13:08:05.814383    7088 system_pods.go:89] "storage-provisioner" [8d89f971-c575-4089-b12b-823fe7524dc2] Running
	I0407 13:08:05.814383    7088 system_pods.go:126] duration metric: took 201.5712ms to wait for k8s-apps to be running ...
	I0407 13:08:05.814383    7088 system_svc.go:44] waiting for kubelet service to be running ....
	I0407 13:08:05.825526    7088 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 13:08:05.848747    7088 system_svc.go:56] duration metric: took 34.3638ms WaitForService to wait for kubelet
	I0407 13:08:05.848747    7088 kubeadm.go:582] duration metric: took 27.2954707s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 13:08:05.848812    7088 node_conditions.go:102] verifying NodePressure condition ...
	I0407 13:08:06.006134    7088 request.go:661] Waited for 157.1876ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.95.223:8443/api/v1/nodes
	I0407 13:08:06.006134    7088 round_trippers.go:470] GET https://172.17.95.223:8443/api/v1/nodes
	I0407 13:08:06.006134    7088 round_trippers.go:476] Request Headers:
	I0407 13:08:06.006134    7088 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 13:08:06.006134    7088 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 13:08:06.012010    7088 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 13:08:06.012010    7088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 13:08:06.012616    7088 node_conditions.go:123] node cpu capacity is 2
	I0407 13:08:06.012616    7088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 13:08:06.012616    7088 node_conditions.go:123] node cpu capacity is 2
	I0407 13:08:06.012616    7088 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 13:08:06.012616    7088 node_conditions.go:123] node cpu capacity is 2
	I0407 13:08:06.012616    7088 node_conditions.go:105] duration metric: took 163.8027ms to run NodePressure ...
	I0407 13:08:06.012726    7088 start.go:241] waiting for startup goroutines ...
	I0407 13:08:06.012726    7088 start.go:255] writing updated cluster config ...
	I0407 13:08:06.024480    7088 ssh_runner.go:195] Run: rm -f paused
	I0407 13:08:06.166146    7088 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0407 13:08:06.171363    7088 out.go:177] * Done! kubectl is now configured to use "ha-573100" cluster and "default" namespace by default
	
	
	==> Docker <==
	Apr 07 13:00:25 ha-573100 dockerd[1465]: time="2025-04-07T13:00:25.835447784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 13:00:25 ha-573100 dockerd[1465]: time="2025-04-07T13:00:25.919154667Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 13:00:25 ha-573100 dockerd[1465]: time="2025-04-07T13:00:25.919286468Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 13:00:25 ha-573100 dockerd[1465]: time="2025-04-07T13:00:25.919303868Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 13:00:25 ha-573100 dockerd[1465]: time="2025-04-07T13:00:25.939051282Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 13:00:26 ha-573100 cri-dockerd[1356]: time="2025-04-07T13:00:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/445e1a78b6431a0d71140de96c13a77c9d52d9223e948af86963ba710b439534/resolv.conf as [nameserver 172.17.80.1]"
	Apr 07 13:00:26 ha-573100 cri-dockerd[1356]: time="2025-04-07T13:00:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b5e4d570b4f2c2584d899cda49b00a4d1370c51ee3637f62bd43b148d44abf06/resolv.conf as [nameserver 172.17.80.1]"
	Apr 07 13:00:26 ha-573100 dockerd[1465]: time="2025-04-07T13:00:26.505815051Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 13:00:26 ha-573100 dockerd[1465]: time="2025-04-07T13:00:26.505957952Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 13:00:26 ha-573100 dockerd[1465]: time="2025-04-07T13:00:26.506067752Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 13:00:26 ha-573100 dockerd[1465]: time="2025-04-07T13:00:26.508339068Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 13:00:26 ha-573100 dockerd[1465]: time="2025-04-07T13:00:26.581174972Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 13:00:26 ha-573100 dockerd[1465]: time="2025-04-07T13:00:26.581299872Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 13:00:26 ha-573100 dockerd[1465]: time="2025-04-07T13:00:26.581312273Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 13:00:26 ha-573100 dockerd[1465]: time="2025-04-07T13:00:26.581476074Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 13:08:45 ha-573100 dockerd[1465]: time="2025-04-07T13:08:45.095486781Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 13:08:45 ha-573100 dockerd[1465]: time="2025-04-07T13:08:45.095629381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 13:08:45 ha-573100 dockerd[1465]: time="2025-04-07T13:08:45.095648281Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 13:08:45 ha-573100 dockerd[1465]: time="2025-04-07T13:08:45.095920882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 13:08:45 ha-573100 cri-dockerd[1356]: time="2025-04-07T13:08:45Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b6426d1240b751896b681cc7894d8a9bafa41a6d27f50fe9a91982928cecea31/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 07 13:08:47 ha-573100 cri-dockerd[1356]: time="2025-04-07T13:08:47Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 07 13:08:47 ha-573100 dockerd[1465]: time="2025-04-07T13:08:47.520545882Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 13:08:47 ha-573100 dockerd[1465]: time="2025-04-07T13:08:47.521400590Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 13:08:47 ha-573100 dockerd[1465]: time="2025-04-07T13:08:47.521568792Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 13:08:47 ha-573100 dockerd[1465]: time="2025-04-07T13:08:47.521673093Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	06b5f6c977b06       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   18 minutes ago      Running             busybox                   0                   b6426d1240b75       busybox-58667487b6-tj2cw
	a02d067ca0257       c69fa2e9cbf5f                                                                                         26 minutes ago      Running             coredns                   0                   b5e4d570b4f2c       coredns-668d6bf9bc-whpg2
	b26f43fa5c1ed       c69fa2e9cbf5f                                                                                         26 minutes ago      Running             coredns                   0                   445e1a78b6431       coredns-668d6bf9bc-z4nkw
	61fc0b71fca43       6e38f40d628db                                                                                         26 minutes ago      Running             storage-provisioner       0                   974692fdcacb6       storage-provisioner
	ff53930de566d       kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495              27 minutes ago      Running             kindnet-cni               0                   23727db8abce4       kindnet-vhm9b
	0c75c161a6626       f1332858868e1                                                                                         27 minutes ago      Running             kube-proxy                0                   65b0ccd3b332a       kube-proxy-xsgf7
	31ba7c7d935d0       ghcr.io/kube-vip/kube-vip@sha256:e01c90bcdd3eb37a46aaf04f6c86cca3e66dd0db7a231f3c8e8aa105635c158a     27 minutes ago      Running             kube-vip                  0                   f928ea89ee802       kube-vip-ha-573100
	bad0116ca1089       d8e673e7c9983                                                                                         27 minutes ago      Running             kube-scheduler            0                   6b7e896091c3e       kube-scheduler-ha-573100
	bba5768a9eb4d       85b7a174738ba                                                                                         27 minutes ago      Running             kube-apiserver            0                   b7c20f9e9ccc7       kube-apiserver-ha-573100
	9dc6d594af6db       b6a454c5a800d                                                                                         27 minutes ago      Running             kube-controller-manager   0                   3f1e795485f06       kube-controller-manager-ha-573100
	0ad6c1a3c3233       a9e7e6b294baf                                                                                         27 minutes ago      Running             etcd                      0                   8094c085641d0       etcd-ha-573100
	
	
	==> coredns [a02d067ca025] <==
	[INFO] plugin/reload: Running configuration SHA512 = 52f38634f47d27a60a843ea08b564c25eb754b24bbf06ec66f8366b52e126543ce16cee7cc062958162af0c89604123ac00e3f032b67ea2f0f7eb90c30818844
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:59124 - 48887 "HINFO IN 1279938540662885478.338108461797422407. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.083715361s
	[INFO] 10.244.1.2:51285 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd 60 0.100440479s
	[INFO] 10.244.2.2:46629 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.06263721s
	[INFO] 10.244.0.4:47614 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 1.247363626s
	[INFO] 10.244.1.2:43452 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.011375511s
	[INFO] 10.244.1.2:33255 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000109901s
	[INFO] 10.244.1.2:32770 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000301403s
	[INFO] 10.244.2.2:45044 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.006079259s
	[INFO] 10.244.2.2:39530 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000107401s
	[INFO] 10.244.2.2:55103 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000218902s
	[INFO] 10.244.2.2:34607 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000103001s
	[INFO] 10.244.0.4:42722 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000148502s
	[INFO] 10.244.0.4:48147 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000304603s
	[INFO] 10.244.1.2:55020 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000348003s
	[INFO] 10.244.2.2:54788 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000179002s
	[INFO] 10.244.0.4:52200 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000228203s
	[INFO] 10.244.0.4:44262 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000213903s
	[INFO] 10.244.0.4:44282 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000168102s
	[INFO] 10.244.1.2:56283 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000237302s
	[INFO] 10.244.2.2:47768 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000092101s
	[INFO] 10.244.0.4:53940 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000291803s
	[INFO] 10.244.0.4:46900 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000125701s
	
	
	==> coredns [b26f43fa5c1e] <==
	[INFO] 10.244.2.2:49769 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000183502s
	[INFO] 10.244.2.2:46536 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000179701s
	[INFO] 10.244.2.2:33107 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000107401s
	[INFO] 10.244.2.2:41774 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000107402s
	[INFO] 10.244.0.4:46869 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000305903s
	[INFO] 10.244.0.4:41513 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000347403s
	[INFO] 10.244.0.4:57173 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.029613287s
	[INFO] 10.244.0.4:58270 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000119901s
	[INFO] 10.244.0.4:33927 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000186402s
	[INFO] 10.244.0.4:44042 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000079301s
	[INFO] 10.244.1.2:43651 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000204102s
	[INFO] 10.244.1.2:49283 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000189302s
	[INFO] 10.244.1.2:52162 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.0000747s
	[INFO] 10.244.2.2:57433 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000136201s
	[INFO] 10.244.2.2:51627 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000212203s
	[INFO] 10.244.2.2:32807 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000069201s
	[INFO] 10.244.0.4:54052 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000155801s
	[INFO] 10.244.1.2:54124 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000189602s
	[INFO] 10.244.1.2:58803 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000187501s
	[INFO] 10.244.1.2:46708 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000207902s
	[INFO] 10.244.2.2:36414 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000221503s
	[INFO] 10.244.2.2:35259 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000222302s
	[INFO] 10.244.2.2:35502 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000108201s
	[INFO] 10.244.0.4:32882 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000233503s
	[INFO] 10.244.0.4:33670 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000127102s
	
	
	==> describe nodes <==
	Name:               ha-573100
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-573100
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
	                    minikube.k8s.io/name=ha-573100
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_07T13_00_00_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 12:59:56 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-573100
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 13:27:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 13:26:20 +0000   Mon, 07 Apr 2025 12:59:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 13:26:20 +0000   Mon, 07 Apr 2025 12:59:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 13:26:20 +0000   Mon, 07 Apr 2025 12:59:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 13:26:20 +0000   Mon, 07 Apr 2025 13:00:23 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.95.223
	  Hostname:    ha-573100
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 7bf27969232e484f9333d5db0fe4ff8e
	  System UUID:                a244b224-8deb-a04f-b638-26a3468cc88e
	  Boot ID:                    b7643801-8375-43e7-a33f-969d88d1e272
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-tj2cw             0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 coredns-668d6bf9bc-whpg2             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 coredns-668d6bf9bc-z4nkw             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-ha-573100                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         27m
	  kube-system                 kindnet-vhm9b                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-ha-573100             250m (12%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-controller-manager-ha-573100    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-xsgf7                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-ha-573100             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-vip-ha-573100                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 27m                kube-proxy       
	  Normal  NodeHasSufficientPID     27m (x7 over 27m)  kubelet          Node ha-573100 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  27m (x8 over 27m)  kubelet          Node ha-573100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m (x8 over 27m)  kubelet          Node ha-573100 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 27m                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  27m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  27m                kubelet          Node ha-573100 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    27m                kubelet          Node ha-573100 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     27m                kubelet          Node ha-573100 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           27m                node-controller  Node ha-573100 event: Registered Node ha-573100 in Controller
	  Normal  NodeReady                26m                kubelet          Node ha-573100 status is now: NodeReady
	  Normal  RegisteredNode           23m                node-controller  Node ha-573100 event: Registered Node ha-573100 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-573100 event: Registered Node ha-573100 in Controller
	
	
	Name:               ha-573100-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-573100-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
	                    minikube.k8s.io/name=ha-573100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_04_07T13_03_38_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 13:03:33 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-573100-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 13:26:12 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 07 Apr 2025 13:25:30 +0000   Mon, 07 Apr 2025 13:27:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 07 Apr 2025 13:25:30 +0000   Mon, 07 Apr 2025 13:27:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 07 Apr 2025 13:25:30 +0000   Mon, 07 Apr 2025 13:27:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 07 Apr 2025 13:25:30 +0000   Mon, 07 Apr 2025 13:27:03 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.17.82.162
	  Hostname:    ha-573100-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 16d352d015944a28a8cbdb5d22377f2b
	  System UUID:                20eb10cf-52ab-d249-9c07-7fd1050910cc
	  Boot ID:                    8202eddd-270b-492a-bed7-b8635542d451
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-gtkbk                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-573100-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kindnet-fxxw5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      23m
	  kube-system                 kube-apiserver-ha-573100-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-ha-573100-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-sxkgm                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-ha-573100-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-vip-ha-573100-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  23m (x8 over 23m)  kubelet          Node ha-573100-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m (x8 over 23m)  kubelet          Node ha-573100-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m (x7 over 23m)  kubelet          Node ha-573100-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           23m                node-controller  Node ha-573100-m02 event: Registered Node ha-573100-m02 in Controller
	  Normal  RegisteredNode           23m                node-controller  Node ha-573100-m02 event: Registered Node ha-573100-m02 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-573100-m02 event: Registered Node ha-573100-m02 in Controller
	  Normal  NodeNotReady             14s                node-controller  Node ha-573100-m02 status is now: NodeNotReady
	
	
	Name:               ha-573100-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-573100-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
	                    minikube.k8s.io/name=ha-573100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_04_07T13_07_38_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 13:07:31 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-573100-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 13:26:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 13:22:08 +0000   Mon, 07 Apr 2025 13:07:31 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 13:22:08 +0000   Mon, 07 Apr 2025 13:07:31 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 13:22:08 +0000   Mon, 07 Apr 2025 13:07:31 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 13:22:08 +0000   Mon, 07 Apr 2025 13:07:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.94.27
	  Hostname:    ha-573100-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 b3278c56cde744459d2e0158ad4d0d5d
	  System UUID:                9ab42d67-ccec-6745-a537-30243250ed15
	  Boot ID:                    8a3ff060-2959-44eb-8288-f4f7d48d3c5c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-szx9k                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 etcd-ha-573100-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         19m
	  kube-system                 kindnet-fbm5f                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-apiserver-ha-573100-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-controller-manager-ha-573100-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-proxy-fgqm9                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-scheduler-ha-573100-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         19m
	  kube-system                 kube-vip-ha-573100-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 19m                kube-proxy       
	  Normal  NodeAllocatableEnforced  19m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m (x8 over 19m)  kubelet          Node ha-573100-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x8 over 19m)  kubelet          Node ha-573100-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x7 over 19m)  kubelet          Node ha-573100-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           19m                node-controller  Node ha-573100-m03 event: Registered Node ha-573100-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-573100-m03 event: Registered Node ha-573100-m03 in Controller
	  Normal  RegisteredNode           19m                node-controller  Node ha-573100-m03 event: Registered Node ha-573100-m03 in Controller
	
	
	Name:               ha-573100-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-573100-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
	                    minikube.k8s.io/name=ha-573100
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_04_07T13_13_06_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 13:13:05 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-573100-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 13:27:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 13:22:37 +0000   Mon, 07 Apr 2025 13:13:05 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 13:22:37 +0000   Mon, 07 Apr 2025 13:13:05 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 13:22:37 +0000   Mon, 07 Apr 2025 13:13:05 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 13:22:37 +0000   Mon, 07 Apr 2025 13:13:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.90.114
	  Hostname:    ha-573100-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 dc41ac0e5de94d6ba7fb194095368dcb
	  System UUID:                60d3eb6a-c6c8-6741-a7dc-9c05d6744898
	  Boot ID:                    6c449262-4e18-4c87-b264-8213744d221a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-cr45l       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-proxy-6scp5    0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 14m                kube-proxy       
	  Normal  NodeHasSufficientMemory  14m (x2 over 14m)  kubelet          Node ha-573100-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m (x2 over 14m)  kubelet          Node ha-573100-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m (x2 over 14m)  kubelet          Node ha-573100-m04 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  14m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14m                node-controller  Node ha-573100-m04 event: Registered Node ha-573100-m04 in Controller
	  Normal  RegisteredNode           14m                node-controller  Node ha-573100-m04 event: Registered Node ha-573100-m04 in Controller
	  Normal  RegisteredNode           14m                node-controller  Node ha-573100-m04 event: Registered Node ha-573100-m04 in Controller
	  Normal  NodeReady                13m                kubelet          Node ha-573100-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr 7 12:58] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +47.727213] systemd-fstab-generator[651]: Ignoring "noauto" option for root device
	[  +0.181165] systemd-fstab-generator[663]: Ignoring "noauto" option for root device
	[Apr 7 12:59] systemd-fstab-generator[1020]: Ignoring "noauto" option for root device
	[  +0.087809] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.492769] systemd-fstab-generator[1058]: Ignoring "noauto" option for root device
	[  +0.203876] systemd-fstab-generator[1070]: Ignoring "noauto" option for root device
	[  +0.216509] systemd-fstab-generator[1085]: Ignoring "noauto" option for root device
	[  +2.835051] systemd-fstab-generator[1309]: Ignoring "noauto" option for root device
	[  +0.177472] systemd-fstab-generator[1321]: Ignoring "noauto" option for root device
	[  +0.177083] systemd-fstab-generator[1334]: Ignoring "noauto" option for root device
	[  +0.262900] systemd-fstab-generator[1348]: Ignoring "noauto" option for root device
	[ +11.578949] systemd-fstab-generator[1450]: Ignoring "noauto" option for root device
	[  +0.110688] kauditd_printk_skb: 206 callbacks suppressed
	[  +3.501132] systemd-fstab-generator[1716]: Ignoring "noauto" option for root device
	[  +6.236247] systemd-fstab-generator[1861]: Ignoring "noauto" option for root device
	[  +0.093827] kauditd_printk_skb: 74 callbacks suppressed
	[  +5.412978] kauditd_printk_skb: 67 callbacks suppressed
	[  +3.605904] systemd-fstab-generator[2382]: Ignoring "noauto" option for root device
	[Apr 7 13:00] kauditd_printk_skb: 17 callbacks suppressed
	[  +7.314204] kauditd_printk_skb: 29 callbacks suppressed
	[Apr 7 13:03] kauditd_printk_skb: 26 callbacks suppressed
	[Apr 7 13:13] hrtimer: interrupt took 4441033 ns
	
	
	==> etcd [0ad6c1a3c323] <==
	{"level":"warn","ts":"2025-04-07T13:27:17.204981Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"926100769fc6b980","from":"926100769fc6b980","remote-peer-id":"84faccb7a9db49e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-07T13:27:17.223173Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"926100769fc6b980","from":"926100769fc6b980","remote-peer-id":"84faccb7a9db49e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-07T13:27:17.233727Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"926100769fc6b980","from":"926100769fc6b980","remote-peer-id":"84faccb7a9db49e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-07T13:27:17.241848Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"926100769fc6b980","from":"926100769fc6b980","remote-peer-id":"84faccb7a9db49e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-07T13:27:17.249075Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"926100769fc6b980","from":"926100769fc6b980","remote-peer-id":"84faccb7a9db49e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-07T13:27:17.259497Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"926100769fc6b980","from":"926100769fc6b980","remote-peer-id":"84faccb7a9db49e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-07T13:27:17.268231Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"926100769fc6b980","from":"926100769fc6b980","remote-peer-id":"84faccb7a9db49e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-07T13:27:17.274144Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"926100769fc6b980","from":"926100769fc6b980","remote-peer-id":"84faccb7a9db49e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-07T13:27:17.278859Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"926100769fc6b980","from":"926100769fc6b980","remote-peer-id":"84faccb7a9db49e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-07T13:27:17.284778Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"926100769fc6b980","from":"926100769fc6b980","remote-peer-id":"84faccb7a9db49e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-07T13:27:17.297670Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"926100769fc6b980","from":"926100769fc6b980","remote-peer-id":"84faccb7a9db49e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-07T13:27:17.305599Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"926100769fc6b980","from":"926100769fc6b980","remote-peer-id":"84faccb7a9db49e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-07T13:27:17.305924Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"926100769fc6b980","from":"926100769fc6b980","remote-peer-id":"84faccb7a9db49e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-07T13:27:17.307800Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"926100769fc6b980","from":"926100769fc6b980","remote-peer-id":"84faccb7a9db49e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-07T13:27:17.312169Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"926100769fc6b980","from":"926100769fc6b980","remote-peer-id":"84faccb7a9db49e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-07T13:27:17.315985Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"926100769fc6b980","from":"926100769fc6b980","remote-peer-id":"84faccb7a9db49e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-07T13:27:17.323293Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"926100769fc6b980","from":"926100769fc6b980","remote-peer-id":"84faccb7a9db49e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-07T13:27:17.331539Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"926100769fc6b980","from":"926100769fc6b980","remote-peer-id":"84faccb7a9db49e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-07T13:27:17.339672Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"926100769fc6b980","from":"926100769fc6b980","remote-peer-id":"84faccb7a9db49e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-07T13:27:17.345049Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"926100769fc6b980","from":"926100769fc6b980","remote-peer-id":"84faccb7a9db49e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-07T13:27:17.348646Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"926100769fc6b980","from":"926100769fc6b980","remote-peer-id":"84faccb7a9db49e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-07T13:27:17.353412Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"926100769fc6b980","from":"926100769fc6b980","remote-peer-id":"84faccb7a9db49e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-07T13:27:17.362390Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"926100769fc6b980","from":"926100769fc6b980","remote-peer-id":"84faccb7a9db49e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-07T13:27:17.370053Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"926100769fc6b980","from":"926100769fc6b980","remote-peer-id":"84faccb7a9db49e7","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2025-04-07T13:27:17.405625Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"926100769fc6b980","from":"926100769fc6b980","remote-peer-id":"84faccb7a9db49e7","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 13:27:17 up 29 min,  0 users,  load average: 0.44, 0.40, 0.32
	Linux ha-573100 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [ff53930de566] <==
	I0407 13:26:43.331847       1 main.go:301] handling current node
	I0407 13:26:53.334179       1 main.go:297] Handling node with IPs: map[172.17.90.114:{}]
	I0407 13:26:53.334287       1 main.go:324] Node ha-573100-m04 has CIDR [10.244.3.0/24] 
	I0407 13:26:53.334721       1 main.go:297] Handling node with IPs: map[172.17.95.223:{}]
	I0407 13:26:53.334812       1 main.go:301] handling current node
	I0407 13:26:53.334840       1 main.go:297] Handling node with IPs: map[172.17.82.162:{}]
	I0407 13:26:53.334853       1 main.go:324] Node ha-573100-m02 has CIDR [10.244.1.0/24] 
	I0407 13:26:53.335308       1 main.go:297] Handling node with IPs: map[172.17.94.27:{}]
	I0407 13:26:53.335474       1 main.go:324] Node ha-573100-m03 has CIDR [10.244.2.0/24] 
	I0407 13:27:03.335388       1 main.go:297] Handling node with IPs: map[172.17.94.27:{}]
	I0407 13:27:03.335494       1 main.go:324] Node ha-573100-m03 has CIDR [10.244.2.0/24] 
	I0407 13:27:03.336033       1 main.go:297] Handling node with IPs: map[172.17.90.114:{}]
	I0407 13:27:03.336188       1 main.go:324] Node ha-573100-m04 has CIDR [10.244.3.0/24] 
	I0407 13:27:03.336473       1 main.go:297] Handling node with IPs: map[172.17.95.223:{}]
	I0407 13:27:03.336646       1 main.go:301] handling current node
	I0407 13:27:03.336755       1 main.go:297] Handling node with IPs: map[172.17.82.162:{}]
	I0407 13:27:03.336767       1 main.go:324] Node ha-573100-m02 has CIDR [10.244.1.0/24] 
	I0407 13:27:13.331931       1 main.go:297] Handling node with IPs: map[172.17.95.223:{}]
	I0407 13:27:13.331971       1 main.go:301] handling current node
	I0407 13:27:13.331992       1 main.go:297] Handling node with IPs: map[172.17.82.162:{}]
	I0407 13:27:13.331998       1 main.go:324] Node ha-573100-m02 has CIDR [10.244.1.0/24] 
	I0407 13:27:13.332206       1 main.go:297] Handling node with IPs: map[172.17.94.27:{}]
	I0407 13:27:13.332215       1 main.go:324] Node ha-573100-m03 has CIDR [10.244.2.0/24] 
	I0407 13:27:13.332493       1 main.go:297] Handling node with IPs: map[172.17.90.114:{}]
	I0407 13:27:13.332503       1 main.go:324] Node ha-573100-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [bba5768a9eb4] <==
	I0407 12:59:57.983721       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0407 12:59:58.581732       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0407 12:59:59.237353       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0407 12:59:59.259824       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0407 12:59:59.275422       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0407 13:00:03.890771       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0407 13:00:04.033269       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0407 13:07:31.933625       1 timeout.go:140] "Post-timeout activity" logger="UnhandledError" timeElapsed="7µs" method="PATCH" path="/api/v1/namespaces/default/events/ha-573100-m03.18340b2eb74566b8" result=null
	E0407 13:07:31.933798       1 finisher.go:175] "Unhandled Error" err="FinishRequest: post-timeout activity - time-elapsed: 26.1µs, panicked: false, err: context canceled, panic-reason: <nil>" logger="UnhandledError"
	E0407 13:07:31.937967       1 wrap.go:53] "Timeout or abort while handling" logger="UnhandledError" method="PATCH" URI="/api/v1/namespaces/default/events/ha-573100-m03.18340b2eb74566b8" auditID="17c35ea3-97a7-4486-b4a9-7e57e93a5e49"
	E0407 13:08:52.418923       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55010: use of closed network connection
	E0407 13:08:53.028768       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55012: use of closed network connection
	E0407 13:08:54.834355       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55014: use of closed network connection
	E0407 13:08:55.507349       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55016: use of closed network connection
	E0407 13:08:56.041504       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55018: use of closed network connection
	E0407 13:08:56.584069       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55021: use of closed network connection
	E0407 13:08:57.096494       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55023: use of closed network connection
	E0407 13:08:57.619763       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55025: use of closed network connection
	E0407 13:08:58.162862       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55027: use of closed network connection
	E0407 13:08:59.080617       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55030: use of closed network connection
	E0407 13:09:09.593391       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55032: use of closed network connection
	E0407 13:09:10.114044       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55034: use of closed network connection
	E0407 13:09:20.603647       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55036: use of closed network connection
	E0407 13:09:21.090186       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55039: use of closed network connection
	E0407 13:09:31.580893       1 conn.go:339] Error on socket receive: read tcp 172.17.95.254:8443->172.17.80.1:55041: use of closed network connection
	
	
	==> kube-controller-manager [9dc6d594af6d] <==
	I0407 13:13:10.788834       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100-m04"
	I0407 13:13:10.879939       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100-m04"
	I0407 13:13:15.810984       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100-m04"
	I0407 13:13:34.598140       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100-m04"
	I0407 13:13:34.604205       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-573100-m04"
	I0407 13:13:34.618867       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100-m04"
	I0407 13:13:35.818928       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100-m04"
	I0407 13:13:36.272288       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100-m04"
	I0407 13:15:19.281909       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100-m02"
	I0407 13:16:07.996525       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100"
	I0407 13:17:02.479641       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100-m03"
	I0407 13:17:30.999780       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100-m04"
	I0407 13:20:23.924943       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100-m02"
	I0407 13:21:14.219791       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100"
	I0407 13:22:08.496310       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100-m03"
	I0407 13:22:37.092694       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100-m04"
	I0407 13:25:30.292708       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100-m02"
	I0407 13:26:20.271638       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100"
	I0407 13:27:03.634263       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100-m02"
	I0407 13:27:03.641260       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-573100-m04"
	I0407 13:27:03.687625       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100-m02"
	I0407 13:27:03.872470       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="134.179485ms"
	I0407 13:27:03.872629       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="42.5µs"
	I0407 13:27:04.087509       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100-m02"
	I0407 13:27:08.997399       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="ha-573100-m02"
	
	
	==> kube-proxy [0c75c161a662] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0407 13:00:05.730778       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0407 13:00:05.789031       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.17.95.223"]
	E0407 13:00:05.789676       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0407 13:00:05.849207       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0407 13:00:05.849339       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0407 13:00:05.849373       1 server_linux.go:170] "Using iptables Proxier"
	I0407 13:00:05.854013       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0407 13:00:05.857233       1 server.go:497] "Version info" version="v1.32.2"
	I0407 13:00:05.857273       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 13:00:05.870851       1 config.go:199] "Starting service config controller"
	I0407 13:00:05.870878       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0407 13:00:05.871069       1 config.go:105] "Starting endpoint slice config controller"
	I0407 13:00:05.871182       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0407 13:00:05.876312       1 config.go:329] "Starting node config controller"
	I0407 13:00:05.876398       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0407 13:00:05.972226       1 shared_informer.go:320] Caches are synced for service config
	I0407 13:00:05.972226       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0407 13:00:05.976621       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [bad0116ca108] <==
	W0407 12:59:57.161350       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0407 12:59:57.161455       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:59:57.186004       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0407 12:59:57.186050       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 12:59:57.260831       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0407 12:59:57.260872       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	I0407 12:59:59.775675       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0407 13:07:31.323960       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-nwwzq\": pod kube-proxy-nwwzq is already assigned to node \"ha-573100-m03\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-nwwzq" node="ha-573100-m03"
	E0407 13:07:31.325584       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-fbm5f\": pod kindnet-fbm5f is already assigned to node \"ha-573100-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-fbm5f" node="ha-573100-m03"
	E0407 13:07:31.330201       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod eccfc010-2f51-4693-92da-ce5e71254f88(kube-system/kindnet-fbm5f) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-fbm5f"
	E0407 13:07:31.332517       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-fbm5f\": pod kindnet-fbm5f is already assigned to node \"ha-573100-m03\"" pod="kube-system/kindnet-fbm5f"
	I0407 13:07:31.332551       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-fbm5f" node="ha-573100-m03"
	E0407 13:07:31.330273       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod d49c5523-bbfb-495b-bba3-b60a86f646fb(kube-system/kube-proxy-nwwzq) wasn't assumed so cannot be forgotten" pod="kube-system/kube-proxy-nwwzq"
	E0407 13:07:31.334602       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-nwwzq\": pod kube-proxy-nwwzq is already assigned to node \"ha-573100-m03\"" pod="kube-system/kube-proxy-nwwzq"
	I0407 13:07:31.334995       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-nwwzq" node="ha-573100-m03"
	E0407 13:07:31.323946       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-q89td\": pod kindnet-q89td is already assigned to node \"ha-573100-m03\"" plugin="DefaultBinder" pod="kube-system/kindnet-q89td" node="ha-573100-m03"
	E0407 13:07:31.335491       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod e32634bd-0382-43ca-bf16-77fe3f5b7fef(kube-system/kindnet-q89td) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-q89td"
	E0407 13:07:31.335565       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-q89td\": pod kindnet-q89td is already assigned to node \"ha-573100-m03\"" pod="kube-system/kindnet-q89td"
	I0407 13:07:31.335665       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-q89td" node="ha-573100-m03"
	E0407 13:13:05.737588       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-fp2zd\": pod kube-proxy-fp2zd is already assigned to node \"ha-573100-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-fp2zd" node="ha-573100-m04"
	E0407 13:13:05.738138       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-fp2zd\": pod kube-proxy-fp2zd is already assigned to node \"ha-573100-m04\"" pod="kube-system/kube-proxy-fp2zd"
	E0407 13:13:05.738991       1 framework.go:1316] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-lbjxj\": pod kindnet-lbjxj is already assigned to node \"ha-573100-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-lbjxj" node="ha-573100-m04"
	E0407 13:13:05.745043       1 schedule_one.go:359] "scheduler cache ForgetPod failed" err="pod 9b556bd7-c26e-4251-8df4-452e92ff5580(kube-system/kindnet-lbjxj) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-lbjxj"
	E0407 13:13:05.745122       1 schedule_one.go:1058] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-lbjxj\": pod kindnet-lbjxj is already assigned to node \"ha-573100-m04\"" pod="kube-system/kindnet-lbjxj"
	I0407 13:13:05.745146       1 schedule_one.go:1071] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-lbjxj" node="ha-573100-m04"
	
	
	==> kubelet <==
	Apr 07 13:22:59 ha-573100 kubelet[2389]: E0407 13:22:59.318343    2389 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 07 13:22:59 ha-573100 kubelet[2389]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 07 13:22:59 ha-573100 kubelet[2389]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 07 13:22:59 ha-573100 kubelet[2389]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 07 13:22:59 ha-573100 kubelet[2389]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 07 13:23:59 ha-573100 kubelet[2389]: E0407 13:23:59.317459    2389 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 07 13:23:59 ha-573100 kubelet[2389]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 07 13:23:59 ha-573100 kubelet[2389]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 07 13:23:59 ha-573100 kubelet[2389]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 07 13:23:59 ha-573100 kubelet[2389]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 07 13:24:59 ha-573100 kubelet[2389]: E0407 13:24:59.317981    2389 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 07 13:24:59 ha-573100 kubelet[2389]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 07 13:24:59 ha-573100 kubelet[2389]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 07 13:24:59 ha-573100 kubelet[2389]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 07 13:24:59 ha-573100 kubelet[2389]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 07 13:25:59 ha-573100 kubelet[2389]: E0407 13:25:59.318008    2389 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 07 13:25:59 ha-573100 kubelet[2389]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 07 13:25:59 ha-573100 kubelet[2389]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 07 13:25:59 ha-573100 kubelet[2389]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 07 13:25:59 ha-573100 kubelet[2389]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 07 13:26:59 ha-573100 kubelet[2389]: E0407 13:26:59.320053    2389 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 07 13:26:59 ha-573100 kubelet[2389]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 07 13:26:59 ha-573100 kubelet[2389]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 07 13:26:59 ha-573100 kubelet[2389]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 07 13:26:59 ha-573100 kubelet[2389]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-573100 -n ha-573100
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p ha-573100 -n ha-573100: (12.4657167s)
helpers_test.go:261: (dbg) Run:  kubectl --context ha-573100 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (92.84s)
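For manual follow-up on a failure like this, the remaining control-plane members can be checked directly; a minimal sketch, assuming the ha-573100 profile and kubeconfig context from this run are still available (names taken from the log above, not re-verified):

	kubectl --context ha-573100 get nodes -o wide
	out/minikube-windows-amd64.exe node list -p ha-573100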

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (58.44s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-140200 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-140200 -- exec busybox-58667487b6-kt4sh -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-140200 -- exec busybox-58667487b6-kt4sh -- sh -c "ping -c 1 172.17.80.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-140200 -- exec busybox-58667487b6-kt4sh -- sh -c "ping -c 1 172.17.80.1": exit status 1 (10.4872258s)

                                                
                                                
-- stdout --
	PING 172.17.80.1 (172.17.80.1): 56 data bytes
	
	--- 172.17.80.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.17.80.1) from pod (busybox-58667487b6-kt4sh): exit status 1
multinode_test.go:572: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-140200 -- exec busybox-58667487b6-vgl84 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-140200 -- exec busybox-58667487b6-vgl84 -- sh -c "ping -c 1 172.17.80.1"
multinode_test.go:583: (dbg) Non-zero exit: out/minikube-windows-amd64.exe kubectl -p multinode-140200 -- exec busybox-58667487b6-vgl84 -- sh -c "ping -c 1 172.17.80.1": exit status 1 (10.490362s)

                                                
                                                
-- stdout --
	PING 172.17.80.1 (172.17.80.1): 56 data bytes
	
	--- 172.17.80.1 ping statistics ---
	1 packets transmitted, 0 packets received, 100% packet loss

                                                
                                                
-- /stdout --
** stderr ** 
	command terminated with exit code 1

                                                
                                                
** /stderr **
multinode_test.go:584: Failed to ping host (172.17.80.1) from pod (busybox-58667487b6-vgl84): exit status 1
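For manual reproduction, the two steps the test performs can be re-run directly against the same profile; a minimal sketch reusing the pod name and host IP reported above (both taken from this log, not re-verified):

	out/minikube-windows-amd64.exe kubectl -p multinode-140200 -- exec busybox-58667487b6-vgl84 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
	out/minikube-windows-amd64.exe kubectl -p multinode-140200 -- exec busybox-58667487b6-vgl84 -- sh -c "ping -c 1 172.17.80.1"

On a healthy setup the second command should report "1 packets transmitted, 1 packets received" rather than the 100% packet loss shown above; both commands are copied from the failing run.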
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-140200 -n multinode-140200
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-140200 -n multinode-140200: (12.0833198s)
helpers_test.go:244: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/PingHostFrom2Pods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 logs -n 25: (8.7445918s)
helpers_test.go:252: TestMultiNode/serial/PingHostFrom2Pods logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |                       Args                        |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | mount-start-2-007800 ssh -- ls                    | mount-start-2-007800 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:53 UTC | 07 Apr 25 13:53 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-1-007800                           | mount-start-1-007800 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:53 UTC | 07 Apr 25 13:53 UTC |
	|         | --alsologtostderr -v=5                            |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-007800 ssh -- ls                    | mount-start-2-007800 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:53 UTC | 07 Apr 25 13:53 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| stop    | -p mount-start-2-007800                           | mount-start-2-007800 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:53 UTC | 07 Apr 25 13:54 UTC |
	| start   | -p mount-start-2-007800                           | mount-start-2-007800 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:54 UTC | 07 Apr 25 13:56 UTC |
	| mount   | C:\Users\jenkins.minikube3:/minikube-host         | mount-start-2-007800 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:56 UTC |                     |
	|         | --profile mount-start-2-007800 --v 0              |                      |                   |         |                     |                     |
	|         | --9p-version 9p2000.L --gid 0 --ip                |                      |                   |         |                     |                     |
	|         | --msize 6543 --port 46465 --type 9p --uid         |                      |                   |         |                     |                     |
	|         |                                                 0 |                      |                   |         |                     |                     |
	| ssh     | mount-start-2-007800 ssh -- ls                    | mount-start-2-007800 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:56 UTC | 07 Apr 25 13:56 UTC |
	|         | /minikube-host                                    |                      |                   |         |                     |                     |
	| delete  | -p mount-start-2-007800                           | mount-start-2-007800 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:56 UTC | 07 Apr 25 13:56 UTC |
	| delete  | -p mount-start-1-007800                           | mount-start-1-007800 | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:56 UTC | 07 Apr 25 13:56 UTC |
	| start   | -p multinode-140200                               | multinode-140200     | minikube3\jenkins | v1.35.0 | 07 Apr 25 13:56 UTC | 07 Apr 25 14:03 UTC |
	|         | --wait=true --memory=2200                         |                      |                   |         |                     |                     |
	|         | --nodes=2 -v=8                                    |                      |                   |         |                     |                     |
	|         | --alsologtostderr                                 |                      |                   |         |                     |                     |
	|         | --driver=hyperv                                   |                      |                   |         |                     |                     |
	| kubectl | -p multinode-140200 -- apply -f                   | multinode-140200     | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:04 UTC | 07 Apr 25 14:04 UTC |
	|         | ./testdata/multinodes/multinode-pod-dns-test.yaml |                      |                   |         |                     |                     |
	| kubectl | -p multinode-140200 -- rollout                    | multinode-140200     | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:04 UTC | 07 Apr 25 14:04 UTC |
	|         | status deployment/busybox                         |                      |                   |         |                     |                     |
	| kubectl | -p multinode-140200 -- get pods -o                | multinode-140200     | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:04 UTC | 07 Apr 25 14:04 UTC |
	|         | jsonpath='{.items[*].status.podIP}'               |                      |                   |         |                     |                     |
	| kubectl | -p multinode-140200 -- get pods -o                | multinode-140200     | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:04 UTC | 07 Apr 25 14:04 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-140200 -- exec                       | multinode-140200     | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:04 UTC | 07 Apr 25 14:04 UTC |
	|         | busybox-58667487b6-kt4sh --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-140200 -- exec                       | multinode-140200     | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:04 UTC | 07 Apr 25 14:04 UTC |
	|         | busybox-58667487b6-vgl84 --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.io                            |                      |                   |         |                     |                     |
	| kubectl | -p multinode-140200 -- exec                       | multinode-140200     | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:04 UTC | 07 Apr 25 14:04 UTC |
	|         | busybox-58667487b6-kt4sh --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-140200 -- exec                       | multinode-140200     | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:04 UTC | 07 Apr 25 14:04 UTC |
	|         | busybox-58667487b6-vgl84 --                       |                      |                   |         |                     |                     |
	|         | nslookup kubernetes.default                       |                      |                   |         |                     |                     |
	| kubectl | -p multinode-140200 -- exec                       | multinode-140200     | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:04 UTC | 07 Apr 25 14:04 UTC |
	|         | busybox-58667487b6-kt4sh -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-140200 -- exec                       | multinode-140200     | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:04 UTC | 07 Apr 25 14:04 UTC |
	|         | busybox-58667487b6-vgl84 -- nslookup              |                      |                   |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-140200 -- get pods -o                | multinode-140200     | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:04 UTC | 07 Apr 25 14:04 UTC |
	|         | jsonpath='{.items[*].metadata.name}'              |                      |                   |         |                     |                     |
	| kubectl | -p multinode-140200 -- exec                       | multinode-140200     | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:04 UTC | 07 Apr 25 14:04 UTC |
	|         | busybox-58667487b6-kt4sh                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-140200 -- exec                       | multinode-140200     | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:04 UTC |                     |
	|         | busybox-58667487b6-kt4sh -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.80.1                          |                      |                   |         |                     |                     |
	| kubectl | -p multinode-140200 -- exec                       | multinode-140200     | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:04 UTC | 07 Apr 25 14:04 UTC |
	|         | busybox-58667487b6-vgl84                          |                      |                   |         |                     |                     |
	|         | -- sh -c nslookup                                 |                      |                   |         |                     |                     |
	|         | host.minikube.internal | awk                      |                      |                   |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3                           |                      |                   |         |                     |                     |
	| kubectl | -p multinode-140200 -- exec                       | multinode-140200     | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:04 UTC |                     |
	|         | busybox-58667487b6-vgl84 -- sh                    |                      |                   |         |                     |                     |
	|         | -c ping -c 1 172.17.80.1                          |                      |                   |         |                     |                     |
	|---------|---------------------------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 13:56:56
	Running on machine: minikube3
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 13:56:56.803087    9720 out.go:345] Setting OutFile to fd 1424 ...
	I0407 13:56:56.879084    9720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:56:56.879084    9720 out.go:358] Setting ErrFile to fd 876...
	I0407 13:56:56.879084    9720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 13:56:56.900115    9720 out.go:352] Setting JSON to false
	I0407 13:56:56.903744    9720 start.go:129] hostinfo: {"hostname":"minikube3","uptime":6009,"bootTime":1744028207,"procs":177,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0407 13:56:56.903744    9720 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0407 13:56:56.909331    9720 out.go:177] * [multinode-140200] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0407 13:56:56.913297    9720 notify.go:220] Checking for updates...
	I0407 13:56:56.915718    9720 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 13:56:56.920473    9720 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 13:56:56.925621    9720 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0407 13:56:56.927294    9720 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 13:56:56.930631    9720 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 13:56:56.935553    9720 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:56:56.935553    9720 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 13:57:02.248688    9720 out.go:177] * Using the hyperv driver based on user configuration
	I0407 13:57:02.253709    9720 start.go:297] selected driver: hyperv
	I0407 13:57:02.253709    9720 start.go:901] validating driver "hyperv" against <nil>
	I0407 13:57:02.253709    9720 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 13:57:02.301365    9720 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 13:57:02.302912    9720 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 13:57:02.303646    9720 cni.go:84] Creating CNI manager for ""
	I0407 13:57:02.303646    9720 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0407 13:57:02.303646    9720 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0407 13:57:02.303646    9720 start.go:340] cluster config:
	{Name:multinode-140200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-140200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:57:02.304167    9720 iso.go:125] acquiring lock: {Name:mk99bbb6a54210c1995fdf151b41c83b57c3735b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 13:57:02.307781    9720 out.go:177] * Starting "multinode-140200" primary control-plane node in "multinode-140200" cluster
	I0407 13:57:02.312332    9720 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 13:57:02.312332    9720 preload.go:146] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0407 13:57:02.312728    9720 cache.go:56] Caching tarball of preloaded images
	I0407 13:57:02.312816    9720 preload.go:172] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0407 13:57:02.312816    9720 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0407 13:57:02.313529    9720 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\config.json ...
	I0407 13:57:02.313678    9720 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\config.json: {Name:mkb5df6b12543f538d6ce7a2ee475a476fc54b7a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:57:02.313911    9720 start.go:360] acquireMachinesLock for multinode-140200: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 13:57:02.314920    9720 start.go:364] duration metric: took 1.0095ms to acquireMachinesLock for "multinode-140200"
	I0407 13:57:02.315344    9720 start.go:93] Provisioning new machine with config: &{Name:multinode-140200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-140200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 13:57:02.315553    9720 start.go:125] createHost starting for "" (driver="hyperv")
	I0407 13:57:02.319739    9720 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0407 13:57:02.320161    9720 start.go:159] libmachine.API.Create for "multinode-140200" (driver="hyperv")
	I0407 13:57:02.320234    9720 client.go:168] LocalClient.Create starting
	I0407 13:57:02.320756    9720 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0407 13:57:02.321077    9720 main.go:141] libmachine: Decoding PEM data...
	I0407 13:57:02.321142    9720 main.go:141] libmachine: Parsing certificate...
	I0407 13:57:02.321432    9720 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0407 13:57:02.321715    9720 main.go:141] libmachine: Decoding PEM data...
	I0407 13:57:02.321764    9720 main.go:141] libmachine: Parsing certificate...
	I0407 13:57:02.321899    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0407 13:57:04.381004    9720 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0407 13:57:04.381004    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:57:04.382085    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0407 13:57:06.068106    9720 main.go:141] libmachine: [stdout =====>] : False
	
	I0407 13:57:06.068150    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:57:06.068150    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0407 13:57:07.561797    9720 main.go:141] libmachine: [stdout =====>] : True
	
	I0407 13:57:07.561900    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:57:07.561973    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0407 13:57:11.100237    9720 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0407 13:57:11.100306    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:57:11.103988    9720 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0407 13:57:11.608038    9720 main.go:141] libmachine: Creating SSH key...
	I0407 13:57:12.015888    9720 main.go:141] libmachine: Creating VM...
	I0407 13:57:12.015888    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0407 13:57:14.888237    9720 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0407 13:57:14.888237    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:57:14.888347    9720 main.go:141] libmachine: Using switch "Default Switch"
	I0407 13:57:14.888462    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0407 13:57:16.619778    9720 main.go:141] libmachine: [stdout =====>] : True
	
	I0407 13:57:16.620752    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:57:16.620752    9720 main.go:141] libmachine: Creating VHD
	I0407 13:57:16.620970    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200\fixed.vhd' -SizeBytes 10MB -Fixed
	I0407 13:57:20.333569    9720 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200\fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 46AF54F9-17AC-4BB0-8933-66D10F06CF18
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0407 13:57:20.333569    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:57:20.333827    9720 main.go:141] libmachine: Writing magic tar header
	I0407 13:57:20.333827    9720 main.go:141] libmachine: Writing SSH key tar header
	I0407 13:57:20.347148    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200\disk.vhd' -VHDType Dynamic -DeleteSource
	I0407 13:57:23.518873    9720 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:57:23.518960    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:57:23.519030    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200\disk.vhd' -SizeBytes 20000MB
	I0407 13:57:26.016067    9720 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:57:26.016067    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:57:26.017059    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-140200 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0407 13:57:29.552390    9720 main.go:141] libmachine: [stdout =====>] : 
	Name             State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----             ----- ----------- ----------------- ------   ------             -------
	multinode-140200 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0407 13:57:29.552390    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:57:29.552531    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-140200 -DynamicMemoryEnabled $false
	I0407 13:57:31.716908    9720 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:57:31.717938    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:57:31.717956    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-140200 -Count 2
	I0407 13:57:33.980791    9720 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:57:33.980791    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:57:33.980791    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-140200 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200\boot2docker.iso'
	I0407 13:57:36.631135    9720 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:57:36.631135    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:57:36.631336    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-140200 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200\disk.vhd'
	I0407 13:57:39.255507    9720 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:57:39.255712    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:57:39.255712    9720 main.go:141] libmachine: Starting VM...
	I0407 13:57:39.255712    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-140200
	I0407 13:57:42.256521    9720 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:57:42.257603    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:57:42.257603    9720 main.go:141] libmachine: Waiting for host to start...
	I0407 13:57:42.257712    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 13:57:44.470094    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:57:44.470094    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:57:44.471098    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 13:57:46.965869    9720 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:57:46.965869    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:57:47.967540    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 13:57:50.195908    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:57:50.196903    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:57:50.196928    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 13:57:52.660971    9720 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:57:52.660971    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:57:53.661277    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 13:57:55.804570    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:57:55.804570    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:57:55.804654    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 13:57:58.360418    9720 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:57:58.360623    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:57:59.361414    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 13:58:01.553504    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:58:01.554620    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:58:01.554674    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 13:58:04.073037    9720 main.go:141] libmachine: [stdout =====>] : 
	I0407 13:58:04.073037    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:58:05.073895    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 13:58:07.264534    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:58:07.264534    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:58:07.265056    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 13:58:09.792756    9720 main.go:141] libmachine: [stdout =====>] : 172.17.92.89
	
	I0407 13:58:09.793754    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:58:09.793776    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 13:58:11.866738    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:58:11.866798    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:58:11.866798    9720 machine.go:93] provisionDockerMachine start ...
	I0407 13:58:11.866798    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 13:58:13.968221    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:58:13.968221    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:58:13.969118    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 13:58:16.535830    9720 main.go:141] libmachine: [stdout =====>] : 172.17.92.89
	
	I0407 13:58:16.535830    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:58:16.540405    9720 main.go:141] libmachine: Using SSH client type: native
	I0407 13:58:16.558852    9720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.92.89 22 <nil> <nil>}
	I0407 13:58:16.558882    9720 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 13:58:16.699553    9720 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0407 13:58:16.699619    9720 buildroot.go:166] provisioning hostname "multinode-140200"
	I0407 13:58:16.699678    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 13:58:18.807901    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:58:18.807901    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:58:18.807997    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 13:58:21.363541    9720 main.go:141] libmachine: [stdout =====>] : 172.17.92.89
	
	I0407 13:58:21.364046    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:58:21.369355    9720 main.go:141] libmachine: Using SSH client type: native
	I0407 13:58:21.370148    9720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.92.89 22 <nil> <nil>}
	I0407 13:58:21.370148    9720 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-140200 && echo "multinode-140200" | sudo tee /etc/hostname
	I0407 13:58:21.522914    9720 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-140200
	
	I0407 13:58:21.523029    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 13:58:23.632824    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:58:23.633442    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:58:23.633442    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 13:58:26.137778    9720 main.go:141] libmachine: [stdout =====>] : 172.17.92.89
	
	I0407 13:58:26.138492    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:58:26.143881    9720 main.go:141] libmachine: Using SSH client type: native
	I0407 13:58:26.144452    9720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.92.89 22 <nil> <nil>}
	I0407 13:58:26.144700    9720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-140200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-140200/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-140200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 13:58:26.300212    9720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 13:58:26.300349    9720 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0407 13:58:26.300349    9720 buildroot.go:174] setting up certificates
	I0407 13:58:26.300349    9720 provision.go:84] configureAuth start
	I0407 13:58:26.300349    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 13:58:28.425027    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:58:28.426129    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:58:28.426129    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 13:58:30.958034    9720 main.go:141] libmachine: [stdout =====>] : 172.17.92.89
	
	I0407 13:58:30.958095    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:58:30.958095    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 13:58:33.038858    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:58:33.038858    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:58:33.039036    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 13:58:35.589194    9720 main.go:141] libmachine: [stdout =====>] : 172.17.92.89
	
	I0407 13:58:35.589194    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:58:35.589194    9720 provision.go:143] copyHostCerts
	I0407 13:58:35.589944    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0407 13:58:35.590242    9720 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0407 13:58:35.590242    9720 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0407 13:58:35.590888    9720 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0407 13:58:35.591670    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0407 13:58:35.592283    9720 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0407 13:58:35.592283    9720 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0407 13:58:35.592456    9720 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0407 13:58:35.593862    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0407 13:58:35.594194    9720 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0407 13:58:35.594254    9720 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0407 13:58:35.594656    9720 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0407 13:58:35.596432    9720 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-140200 san=[127.0.0.1 172.17.92.89 localhost minikube multinode-140200]
	I0407 13:58:35.736545    9720 provision.go:177] copyRemoteCerts
	I0407 13:58:35.746662    9720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 13:58:35.746662    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 13:58:37.851072    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:58:37.851072    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:58:37.851511    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 13:58:40.470151    9720 main.go:141] libmachine: [stdout =====>] : 172.17.92.89
	
	I0407 13:58:40.470248    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:58:40.470457    9720 sshutil.go:53] new ssh client: &{IP:172.17.92.89 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200\id_rsa Username:docker}
	I0407 13:58:40.586162    9720 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.8394665s)
	I0407 13:58:40.586162    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0407 13:58:40.587155    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0407 13:58:40.632017    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0407 13:58:40.632385    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0407 13:58:40.678669    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0407 13:58:40.679056    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 13:58:40.724350    9720 provision.go:87] duration metric: took 14.4238466s to configureAuth
	I0407 13:58:40.724350    9720 buildroot.go:189] setting minikube options for container-runtime
	I0407 13:58:40.724997    9720 config.go:182] Loaded profile config "multinode-140200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 13:58:40.725060    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 13:58:42.837307    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:58:42.837707    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:58:42.838004    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 13:58:45.330955    9720 main.go:141] libmachine: [stdout =====>] : 172.17.92.89
	
	I0407 13:58:45.331515    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:58:45.337527    9720 main.go:141] libmachine: Using SSH client type: native
	I0407 13:58:45.338189    9720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.92.89 22 <nil> <nil>}
	I0407 13:58:45.338189    9720 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0407 13:58:45.479260    9720 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0407 13:58:45.479260    9720 buildroot.go:70] root file system type: tmpfs
	I0407 13:58:45.479260    9720 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0407 13:58:45.479798    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 13:58:47.596703    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:58:47.597414    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:58:47.597506    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 13:58:50.165545    9720 main.go:141] libmachine: [stdout =====>] : 172.17.92.89
	
	I0407 13:58:50.165912    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:58:50.171825    9720 main.go:141] libmachine: Using SSH client type: native
	I0407 13:58:50.171976    9720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.92.89 22 <nil> <nil>}
	I0407 13:58:50.172619    9720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0407 13:58:50.339523    9720 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0407 13:58:50.339659    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 13:58:52.429584    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:58:52.429584    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:58:52.429584    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 13:58:54.919482    9720 main.go:141] libmachine: [stdout =====>] : 172.17.92.89
	
	I0407 13:58:54.919850    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:58:54.924920    9720 main.go:141] libmachine: Using SSH client type: native
	I0407 13:58:54.925623    9720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.92.89 22 <nil> <nil>}
	I0407 13:58:54.925623    9720 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0407 13:58:57.124267    9720 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0407 13:58:57.124267    9720 machine.go:96] duration metric: took 45.2571523s to provisionDockerMachine
	I0407 13:58:57.124267    9720 client.go:171] duration metric: took 1m54.80323s to LocalClient.Create
	I0407 13:58:57.124267    9720 start.go:167] duration metric: took 1m54.8033033s to libmachine.API.Create "multinode-140200"
	I0407 13:58:57.124267    9720 start.go:293] postStartSetup for "multinode-140200" (driver="hyperv")
	I0407 13:58:57.124267    9720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 13:58:57.134888    9720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 13:58:57.134888    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 13:58:59.188697    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:58:59.189307    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:58:59.189428    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 13:59:01.831601    9720 main.go:141] libmachine: [stdout =====>] : 172.17.92.89
	
	I0407 13:59:01.832558    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:59:01.832761    9720 sshutil.go:53] new ssh client: &{IP:172.17.92.89 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200\id_rsa Username:docker}
	I0407 13:59:01.947993    9720 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8130712s)
	I0407 13:59:01.959241    9720 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 13:59:01.969104    9720 command_runner.go:130] > NAME=Buildroot
	I0407 13:59:01.969104    9720 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0407 13:59:01.969104    9720 command_runner.go:130] > ID=buildroot
	I0407 13:59:01.969104    9720 command_runner.go:130] > VERSION_ID=2023.02.9
	I0407 13:59:01.969104    9720 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0407 13:59:01.970077    9720 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 13:59:01.970115    9720 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0407 13:59:01.970115    9720 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0407 13:59:01.971417    9720 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> 77282.pem in /etc/ssl/certs
	I0407 13:59:01.971417    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> /etc/ssl/certs/77282.pem
	I0407 13:59:01.982909    9720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 13:59:02.001506    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem --> /etc/ssl/certs/77282.pem (1708 bytes)
	I0407 13:59:02.051009    9720 start.go:296] duration metric: took 4.9267074s for postStartSetup
	I0407 13:59:02.053500    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 13:59:04.168625    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:59:04.168625    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:59:04.169160    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 13:59:06.644503    9720 main.go:141] libmachine: [stdout =====>] : 172.17.92.89
	
	I0407 13:59:06.644503    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:59:06.645492    9720 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\config.json ...
	I0407 13:59:06.648044    9720 start.go:128] duration metric: took 2m4.3316211s to createHost
	I0407 13:59:06.648212    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 13:59:08.697903    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:59:08.698562    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:59:08.698705    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 13:59:11.135232    9720 main.go:141] libmachine: [stdout =====>] : 172.17.92.89
	
	I0407 13:59:11.135232    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:59:11.141294    9720 main.go:141] libmachine: Using SSH client type: native
	I0407 13:59:11.141833    9720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.92.89 22 <nil> <nil>}
	I0407 13:59:11.141833    9720 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 13:59:11.269011    9720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744034351.288197290
	
	I0407 13:59:11.269011    9720 fix.go:216] guest clock: 1744034351.288197290
	I0407 13:59:11.269011    9720 fix.go:229] Guest: 2025-04-07 13:59:11.28819729 +0000 UTC Remote: 2025-04-07 13:59:06.6482121 +0000 UTC m=+129.939977601 (delta=4.63998519s)
	I0407 13:59:11.269169    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 13:59:13.335049    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:59:13.335756    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:59:13.335803    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 13:59:15.801431    9720 main.go:141] libmachine: [stdout =====>] : 172.17.92.89
	
	I0407 13:59:15.801431    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:59:15.806506    9720 main.go:141] libmachine: Using SSH client type: native
	I0407 13:59:15.807261    9720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.92.89 22 <nil> <nil>}
	I0407 13:59:15.807261    9720 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744034351
	I0407 13:59:15.960276    9720 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  7 13:59:11 UTC 2025
	
	I0407 13:59:15.960276    9720 fix.go:236] clock set: Mon Apr  7 13:59:11 UTC 2025
	 (err=<nil>)
	I0407 13:59:15.960276    9720 start.go:83] releasing machines lock for "multinode-140200", held for 2m13.6444205s
	I0407 13:59:15.960276    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 13:59:18.072482    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:59:18.073764    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:59:18.074134    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 13:59:20.541923    9720 main.go:141] libmachine: [stdout =====>] : 172.17.92.89
	
	I0407 13:59:20.542910    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:59:20.547773    9720 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0407 13:59:20.547867    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 13:59:20.560118    9720 ssh_runner.go:195] Run: cat /version.json
	I0407 13:59:20.560532    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 13:59:22.770759    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:59:22.770759    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:59:22.770759    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 13:59:22.770759    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 13:59:22.770759    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:59:22.770759    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 13:59:25.420996    9720 main.go:141] libmachine: [stdout =====>] : 172.17.92.89
	
	I0407 13:59:25.420996    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:59:25.421626    9720 sshutil.go:53] new ssh client: &{IP:172.17.92.89 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200\id_rsa Username:docker}
	I0407 13:59:25.452674    9720 main.go:141] libmachine: [stdout =====>] : 172.17.92.89
	
	I0407 13:59:25.452674    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 13:59:25.453631    9720 sshutil.go:53] new ssh client: &{IP:172.17.92.89 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200\id_rsa Username:docker}
	I0407 13:59:25.520319    9720 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0407 13:59:25.520319    9720 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9725113s)
	W0407 13:59:25.520319    9720 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0407 13:59:25.554142    9720 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0407 13:59:25.554671    9720 ssh_runner.go:235] Completed: cat /version.json: (4.9945185s)
	I0407 13:59:25.565663    9720 ssh_runner.go:195] Run: systemctl --version
	I0407 13:59:25.575306    9720 command_runner.go:130] > systemd 252 (252)
	I0407 13:59:25.575493    9720 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0407 13:59:25.587292    9720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0407 13:59:25.594548    9720 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0407 13:59:25.595114    9720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 13:59:25.606782    9720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 13:59:25.633468    9720 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0407 13:59:25.633586    9720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 13:59:25.633586    9720 start.go:495] detecting cgroup driver to use...
	I0407 13:59:25.633847    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0407 13:59:25.637245    9720 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0407 13:59:25.637245    9720 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0407 13:59:25.668870    9720 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0407 13:59:25.680467    9720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0407 13:59:25.709174    9720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0407 13:59:25.733125    9720 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0407 13:59:25.743448    9720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0407 13:59:25.771287    9720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 13:59:25.798920    9720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0407 13:59:25.826133    9720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 13:59:25.854259    9720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 13:59:25.882967    9720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0407 13:59:25.912152    9720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0407 13:59:25.943201    9720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0407 13:59:25.969786    9720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 13:59:25.986262    9720 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 13:59:25.986428    9720 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 13:59:25.997143    9720 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 13:59:26.036425    9720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 13:59:26.063598    9720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:59:26.245108    9720 ssh_runner.go:195] Run: sudo systemctl restart containerd
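	The sequence above is minikube's bridge-netfilter handling: the sysctl probe fails because br_netfilter is not loaded, so the module is loaded and IPv4 forwarding is enabled before containerd is restarted. A minimal sketch of the same checks run by hand inside the VM (assuming shell access, e.g. via `minikube ssh -p multinode-140200`):

	    # Load the bridge netfilter module; the sysctl key only exists once it is loaded
	    sudo modprobe br_netfilter
	    # Re-run the probe that failed above; it should now resolve instead of "cannot stat"
	    sudo sysctl net.bridge.bridge-nf-call-iptables
	    # Enable IPv4 forwarding, matching the logged command
	    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"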
	I0407 13:59:26.274255    9720 start.go:495] detecting cgroup driver to use...
	I0407 13:59:26.285499    9720 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0407 13:59:26.307613    9720 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0407 13:59:26.307676    9720 command_runner.go:130] > [Unit]
	I0407 13:59:26.307676    9720 command_runner.go:130] > Description=Docker Application Container Engine
	I0407 13:59:26.307676    9720 command_runner.go:130] > Documentation=https://docs.docker.com
	I0407 13:59:26.307676    9720 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0407 13:59:26.307676    9720 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0407 13:59:26.307676    9720 command_runner.go:130] > StartLimitBurst=3
	I0407 13:59:26.307744    9720 command_runner.go:130] > StartLimitIntervalSec=60
	I0407 13:59:26.307744    9720 command_runner.go:130] > [Service]
	I0407 13:59:26.307744    9720 command_runner.go:130] > Type=notify
	I0407 13:59:26.307744    9720 command_runner.go:130] > Restart=on-failure
	I0407 13:59:26.307744    9720 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0407 13:59:26.307816    9720 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0407 13:59:26.307816    9720 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0407 13:59:26.307816    9720 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0407 13:59:26.307816    9720 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0407 13:59:26.307816    9720 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0407 13:59:26.307884    9720 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0407 13:59:26.307908    9720 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0407 13:59:26.307948    9720 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0407 13:59:26.307962    9720 command_runner.go:130] > ExecStart=
	I0407 13:59:26.307962    9720 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0407 13:59:26.307962    9720 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0407 13:59:26.307962    9720 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0407 13:59:26.307962    9720 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0407 13:59:26.307962    9720 command_runner.go:130] > LimitNOFILE=infinity
	I0407 13:59:26.308062    9720 command_runner.go:130] > LimitNPROC=infinity
	I0407 13:59:26.308062    9720 command_runner.go:130] > LimitCORE=infinity
	I0407 13:59:26.308062    9720 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0407 13:59:26.308062    9720 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0407 13:59:26.308062    9720 command_runner.go:130] > TasksMax=infinity
	I0407 13:59:26.308062    9720 command_runner.go:130] > TimeoutStartSec=0
	I0407 13:59:26.308062    9720 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0407 13:59:26.308062    9720 command_runner.go:130] > Delegate=yes
	I0407 13:59:26.308145    9720 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0407 13:59:26.308145    9720 command_runner.go:130] > KillMode=process
	I0407 13:59:26.308145    9720 command_runner.go:130] > [Install]
	I0407 13:59:26.308145    9720 command_runner.go:130] > WantedBy=multi-user.target
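	The drop-in above clears the inherited ExecStart and pins the dockerd flags used on this VM. A short sketch, assuming shell access to the node, of how the effective unit and the resulting cgroup driver can be confirmed; both commands also appear verbatim elsewhere in this log:

	    # Show the unit systemd actually loaded, including this drop-in
	    sudo systemctl cat docker.service
	    # Should print "cgroupfs", matching the cgroupDriver in the kubelet config written below
	    docker info --format '{{.CgroupDriver}}'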
	I0407 13:59:26.319509    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 13:59:26.348850    9720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 13:59:26.388810    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 13:59:26.422055    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 13:59:26.452466    9720 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0407 13:59:26.514873    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 13:59:26.537900    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 13:59:26.569857    9720 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0407 13:59:26.580732    9720 ssh_runner.go:195] Run: which cri-dockerd
	I0407 13:59:26.585483    9720 command_runner.go:130] > /usr/bin/cri-dockerd
	I0407 13:59:26.594423    9720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0407 13:59:26.609748    9720 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0407 13:59:26.650723    9720 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0407 13:59:26.838204    9720 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0407 13:59:27.011491    9720 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0407 13:59:27.011702    9720 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0407 13:59:27.053932    9720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:59:27.249452    9720 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0407 13:59:29.784314    9720 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.534844s)
	I0407 13:59:29.794291    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0407 13:59:29.825666    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 13:59:29.856132    9720 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0407 13:59:30.036914    9720 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0407 13:59:30.234555    9720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:59:30.414707    9720 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0407 13:59:30.449335    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 13:59:30.478915    9720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:59:30.658839    9720 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0407 13:59:30.758408    9720 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0407 13:59:30.767064    9720 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0407 13:59:30.777207    9720 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0407 13:59:30.777207    9720 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0407 13:59:30.777207    9720 command_runner.go:130] > Device: 0,22	Inode: 879         Links: 1
	I0407 13:59:30.777207    9720 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0407 13:59:30.777207    9720 command_runner.go:130] > Access: 2025-04-07 13:59:30.701809484 +0000
	I0407 13:59:30.777207    9720 command_runner.go:130] > Modify: 2025-04-07 13:59:30.701809484 +0000
	I0407 13:59:30.777207    9720 command_runner.go:130] > Change: 2025-04-07 13:59:30.705809509 +0000
	I0407 13:59:30.777207    9720 command_runner.go:130] >  Birth: -
	I0407 13:59:30.777207    9720 start.go:563] Will wait 60s for crictl version
	I0407 13:59:30.787038    9720 ssh_runner.go:195] Run: which crictl
	I0407 13:59:30.792294    9720 command_runner.go:130] > /usr/bin/crictl
	I0407 13:59:30.802796    9720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 13:59:30.859163    9720 command_runner.go:130] > Version:  0.1.0
	I0407 13:59:30.859163    9720 command_runner.go:130] > RuntimeName:  docker
	I0407 13:59:30.859163    9720 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0407 13:59:30.859163    9720 command_runner.go:130] > RuntimeApiVersion:  v1
	I0407 13:59:30.859163    9720 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0407 13:59:30.867665    9720 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 13:59:30.897836    9720 command_runner.go:130] > 27.4.0
	I0407 13:59:30.905861    9720 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 13:59:30.937069    9720 command_runner.go:130] > 27.4.0
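	At this point /etc/crictl.yaml points the CRI tools at unix:///var/run/cri-dockerd.sock and the runtime reports Docker 27.4.0. A sketch of reproducing the same checks by hand, assuming the socket path written above:

	    # crictl picks up the runtime endpoint from /etc/crictl.yaml
	    sudo crictl version
	    # or pass the endpoint explicitly
	    sudo crictl --runtime-endpoint unix:///var/run/cri-dockerd.sock version
	    # server-side Docker version, as queried twice in the log
	    docker version --format '{{.Server.Version}}'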
	I0407 13:59:30.941839    9720 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0407 13:59:30.941839    9720 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0407 13:59:30.946672    9720 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0407 13:59:30.946672    9720 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0407 13:59:30.946672    9720 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0407 13:59:30.946672    9720 ip.go:211] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:71:e6:e3 Flags:up|broadcast|multicast|running}
	I0407 13:59:30.948673    9720 ip.go:214] interface addr: fe80::8b22:fcbb:f73:c9e5/64
	I0407 13:59:30.948673    9720 ip.go:214] interface addr: 172.17.80.1/20
	I0407 13:59:30.961890    9720 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0407 13:59:30.967958    9720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:59:30.988174    9720 kubeadm.go:883] updating cluster {Name:multinode-140200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-1
40200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.92.89 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p M
ountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 13:59:30.988174    9720 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 13:59:30.995174    9720 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0407 13:59:31.014645    9720 docker.go:689] Got preloaded images: 
	I0407 13:59:31.014645    9720 docker.go:695] registry.k8s.io/kube-apiserver:v1.32.2 wasn't preloaded
	I0407 13:59:31.025300    9720 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0407 13:59:31.049683    9720 command_runner.go:139] > {"Repositories":{}}
	I0407 13:59:31.061793    9720 ssh_runner.go:195] Run: which lz4
	I0407 13:59:31.069985    9720 command_runner.go:130] > /usr/bin/lz4
	I0407 13:59:31.070172    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0407 13:59:31.081434    9720 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0407 13:59:31.087218    9720 command_runner.go:130] ! stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0407 13:59:31.087218    9720 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0407 13:59:31.087409    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (349803115 bytes)
	I0407 13:59:32.714909    9720 docker.go:653] duration metric: took 1.6443909s to copy over tarball
	I0407 13:59:32.724765    9720 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0407 13:59:41.382258    9720 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (8.6574317s)
	I0407 13:59:41.382258    9720 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0407 13:59:41.440334    9720 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0407 13:59:41.467298    9720 command_runner.go:139] > {"Repositories":{"gcr.io/k8s-minikube/storage-provisioner":{"gcr.io/k8s-minikube/storage-provisioner:v5":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944":"sha256:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562"},"registry.k8s.io/coredns/coredns":{"registry.k8s.io/coredns/coredns:v1.11.3":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e":"sha256:c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6"},"registry.k8s.io/etcd":{"registry.k8s.io/etcd:3.5.16-0":"sha256:a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5":"sha256:a9e7e6b294baf1695fccb862d95
6c5d3ad8510e1e4ca1535f35dc09f247abbfc"},"registry.k8s.io/kube-apiserver":{"registry.k8s.io/kube-apiserver:v1.32.2":"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","registry.k8s.io/kube-apiserver@sha256:c47449f3e751588ea0cb74e325e0f83db335a415f4f4c7fb147375dd6c84757f":"sha256:85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef"},"registry.k8s.io/kube-controller-manager":{"registry.k8s.io/kube-controller-manager:v1.32.2":"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389","registry.k8s.io/kube-controller-manager@sha256:399aa50f4d1361c59dc458e634506d02de32613d03a9a614a21058741162ef90":"sha256:b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389"},"registry.k8s.io/kube-proxy":{"registry.k8s.io/kube-proxy:v1.32.2":"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5","registry.k8s.io/kube-proxy@sha256:83c025f0faa6799fab6645102a98138e39a9a7db2be3bc792c79d72659b1805d":"sha256:f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68f
f49a87c2266ebc5"},"registry.k8s.io/kube-scheduler":{"registry.k8s.io/kube-scheduler:v1.32.2":"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","registry.k8s.io/kube-scheduler@sha256:45710d74cfd5aa10a001d0cf81747b77c28617444ffee0503d12f1dcd7450f76":"sha256:d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d"},"registry.k8s.io/pause":{"registry.k8s.io/pause:3.10":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a":"sha256:873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136"}}}
	I0407 13:59:41.467653    9720 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2631 bytes)
	I0407 13:59:41.509002    9720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:59:41.705660    9720 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0407 13:59:44.843470    9720 ssh_runner.go:235] Completed: sudo systemctl restart docker: (3.1377882s)
	I0407 13:59:44.852989    9720 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0407 13:59:44.883358    9720 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.32.2
	I0407 13:59:44.883476    9720 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.32.2
	I0407 13:59:44.883476    9720 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.32.2
	I0407 13:59:44.883476    9720 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.32.2
	I0407 13:59:44.883476    9720 command_runner.go:130] > registry.k8s.io/etcd:3.5.16-0
	I0407 13:59:44.883476    9720 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0407 13:59:44.883476    9720 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0407 13:59:44.883476    9720 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 13:59:44.883565    9720 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
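	The preload path shown above copies the cached tarball to /preloaded.tar.lz4, unpacks it under /var, restores /var/lib/docker/image/overlay2/repositories.json, and restarts Docker so the image store is picked up. A condensed sketch of those steps, using the commands from the log:

	    # Unpack the preloaded image layers into /var/lib/docker, preserving xattrs
	    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	    sudo rm /preloaded.tar.lz4
	    # Restart Docker so it sees the restored layers, then confirm the preloaded images
	    sudo systemctl daemon-reload && sudo systemctl restart docker
	    docker images --format '{{.Repository}}:{{.Tag}}'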
	I0407 13:59:44.883645    9720 cache_images.go:84] Images are preloaded, skipping loading
	I0407 13:59:44.883705    9720 kubeadm.go:934] updating node { 172.17.92.89 8443 v1.32.2 docker true true} ...
	I0407 13:59:44.883899    9720 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-140200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.92.89
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:multinode-140200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 13:59:44.893060    9720 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0407 13:59:44.967649    9720 command_runner.go:130] > cgroupfs
	I0407 13:59:44.968066    9720 cni.go:84] Creating CNI manager for ""
	I0407 13:59:44.968066    9720 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0407 13:59:44.968066    9720 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 13:59:44.968233    9720 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.92.89 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-140200 NodeName:multinode-140200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.92.89"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.92.89 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 13:59:44.968525    9720 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.92.89
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-140200"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.17.92.89"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.92.89"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
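	The kubeadm config above is staged as /var/tmp/minikube/kubeadm.yaml.new and copied into place before `kubeadm init` runs further down with a long --ignore-preflight-errors list. As a hedged aside, the same config could be sanity-checked without modifying the node via kubeadm's standard --dry-run flag (not something this test does):

	    # Render the init steps without applying them (assumes root on the VM and the staged config)
	    sudo /var/lib/minikube/binaries/v1.32.2/kubeadm init \
	      --config /var/tmp/minikube/kubeadm.yaml --dry-run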
	I0407 13:59:44.981690    9720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 13:59:45.000271    9720 command_runner.go:130] > kubeadm
	I0407 13:59:45.000271    9720 command_runner.go:130] > kubectl
	I0407 13:59:45.000271    9720 command_runner.go:130] > kubelet
	I0407 13:59:45.000271    9720 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 13:59:45.010244    9720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 13:59:45.026261    9720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0407 13:59:45.066283    9720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 13:59:45.094461    9720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
	I0407 13:59:45.144029    9720 ssh_runner.go:195] Run: grep 172.17.92.89	control-plane.minikube.internal$ /etc/hosts
	I0407 13:59:45.150667    9720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.92.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 13:59:45.182429    9720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 13:59:45.359218    9720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 13:59:45.390729    9720 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200 for IP: 172.17.92.89
	I0407 13:59:45.390729    9720 certs.go:194] generating shared ca certs ...
	I0407 13:59:45.390729    9720 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:59:45.392047    9720 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0407 13:59:45.392534    9720 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0407 13:59:45.392688    9720 certs.go:256] generating profile certs ...
	I0407 13:59:45.393437    9720 certs.go:363] generating signed profile cert for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\client.key
	I0407 13:59:45.393622    9720 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\client.crt with IP's: []
	I0407 13:59:45.906406    9720 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\client.crt ...
	I0407 13:59:45.907413    9720 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\client.crt: {Name:mk5fdd57c3ff13979389954856becb8292bb172e Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:59:45.908726    9720 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\client.key ...
	I0407 13:59:45.908726    9720 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\client.key: {Name:mk4644ae9437716419de854f327451302e1353a0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:59:45.909858    9720 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.key.20c55106
	I0407 13:59:45.910759    9720 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.crt.20c55106 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.92.89]
	I0407 13:59:46.058626    9720 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.crt.20c55106 ...
	I0407 13:59:46.058626    9720 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.crt.20c55106: {Name:mkca1cb6552e7f74be3ea7ff179bda5f7fe10dda Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:59:46.059653    9720 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.key.20c55106 ...
	I0407 13:59:46.060660    9720 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.key.20c55106: {Name:mkd9aaf8b3d47503d09e16080eebbb59155f9e28 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:59:46.060952    9720 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.crt.20c55106 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.crt
	I0407 13:59:46.076002    9720 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.key.20c55106 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.key
	I0407 13:59:46.076002    9720 certs.go:363] generating signed profile cert for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\proxy-client.key
	I0407 13:59:46.076002    9720 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\proxy-client.crt with IP's: []
	I0407 13:59:46.255013    9720 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\proxy-client.crt ...
	I0407 13:59:46.255013    9720 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\proxy-client.crt: {Name:mkbb7a2d8bed05e454c0f2d9fe83d612c7beac13 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:59:46.256834    9720 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\proxy-client.key ...
	I0407 13:59:46.256834    9720 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\proxy-client.key: {Name:mk3f403e0541cc8f424aed20c9970e35f71b0dff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 13:59:46.258090    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0407 13:59:46.258301    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0407 13:59:46.258592    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0407 13:59:46.258754    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0407 13:59:46.258754    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0407 13:59:46.258754    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0407 13:59:46.259314    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0407 13:59:46.271140    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0407 13:59:46.272091    9720 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem (1338 bytes)
	W0407 13:59:46.272669    9720 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728_empty.pem, impossibly tiny 0 bytes
	I0407 13:59:46.272669    9720 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0407 13:59:46.273153    9720 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0407 13:59:46.273153    9720 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0407 13:59:46.274091    9720 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0407 13:59:46.274091    9720 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem (1708 bytes)
	I0407 13:59:46.275089    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem -> /usr/share/ca-certificates/7728.pem
	I0407 13:59:46.275089    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> /usr/share/ca-certificates/77282.pem
	I0407 13:59:46.275089    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:59:46.276164    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 13:59:46.322451    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 13:59:46.369213    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 13:59:46.405799    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0407 13:59:46.447908    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0407 13:59:46.493435    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0407 13:59:46.535558    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 13:59:46.577363    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 13:59:46.618963    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem --> /usr/share/ca-certificates/7728.pem (1338 bytes)
	I0407 13:59:46.662614    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem --> /usr/share/ca-certificates/77282.pem (1708 bytes)
	I0407 13:59:46.702393    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 13:59:46.745863    9720 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 13:59:46.787000    9720 ssh_runner.go:195] Run: openssl version
	I0407 13:59:46.795265    9720 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0407 13:59:46.805255    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 13:59:46.833097    9720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:59:46.839339    9720 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr  7 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:59:46.839339    9720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:59:46.851187    9720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 13:59:46.860026    9720 command_runner.go:130] > b5213941
	I0407 13:59:46.871625    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 13:59:46.903493    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7728.pem && ln -fs /usr/share/ca-certificates/7728.pem /etc/ssl/certs/7728.pem"
	I0407 13:59:46.932277    9720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7728.pem
	I0407 13:59:46.939098    9720 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr  7 12:38 /usr/share/ca-certificates/7728.pem
	I0407 13:59:46.939098    9720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:38 /usr/share/ca-certificates/7728.pem
	I0407 13:59:46.950714    9720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7728.pem
	I0407 13:59:46.958212    9720 command_runner.go:130] > 51391683
	I0407 13:59:46.969865    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7728.pem /etc/ssl/certs/51391683.0"
	I0407 13:59:47.002423    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77282.pem && ln -fs /usr/share/ca-certificates/77282.pem /etc/ssl/certs/77282.pem"
	I0407 13:59:47.032539    9720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77282.pem
	I0407 13:59:47.039903    9720 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr  7 12:38 /usr/share/ca-certificates/77282.pem
	I0407 13:59:47.039903    9720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:38 /usr/share/ca-certificates/77282.pem
	I0407 13:59:47.052418    9720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77282.pem
	I0407 13:59:47.061711    9720 command_runner.go:130] > 3ec20f2e
	I0407 13:59:47.071869    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77282.pem /etc/ssl/certs/3ec20f2e.0"
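	The three blocks above install each CA under /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject hash, which is how OpenSSL's default verify path locates trust anchors. The pattern for one certificate, using the hash b5213941 computed in this run:

	    # Compute the subject hash OpenSSL uses to look the CA up in /etc/ssl/certs
	    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 here
	    # Link the cert under <hash>.0 so it is found during chain verification
	    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"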
	I0407 13:59:47.101053    9720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 13:59:47.107474    9720 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0407 13:59:47.107474    9720 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0407 13:59:47.108137    9720 kubeadm.go:392] StartCluster: {Name:multinode-140200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-1402
00 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.92.89 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p Moun
tUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 13:59:47.117526    9720 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0407 13:59:47.151707    9720 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 13:59:47.168976    9720 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/kubeadm-flags.env': No such file or directory
	I0407 13:59:47.168976    9720 command_runner.go:130] ! ls: cannot access '/var/lib/kubelet/config.yaml': No such file or directory
	I0407 13:59:47.168976    9720 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/etcd': No such file or directory
	I0407 13:59:47.180627    9720 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 13:59:47.208858    9720 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 13:59:47.225089    9720 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0407 13:59:47.225676    9720 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0407 13:59:47.225676    9720 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0407 13:59:47.225676    9720 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 13:59:47.225676    9720 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 13:59:47.225676    9720 kubeadm.go:157] found existing configuration files:
	
	I0407 13:59:47.237413    9720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 13:59:47.253151    9720 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 13:59:47.254198    9720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 13:59:47.265406    9720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 13:59:47.294572    9720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 13:59:47.312638    9720 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 13:59:47.313350    9720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 13:59:47.323166    9720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 13:59:47.351755    9720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 13:59:47.367896    9720 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 13:59:47.367896    9720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 13:59:47.380569    9720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 13:59:47.406945    9720 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 13:59:47.421823    9720 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 13:59:47.421995    9720 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 13:59:47.432588    9720 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 13:59:47.448674    9720 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0407 13:59:47.806664    9720 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 13:59:47.806664    9720 kubeadm.go:310] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 14:00:01.054773    9720 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0407 14:00:01.054773    9720 command_runner.go:130] > [init] Using Kubernetes version: v1.32.2
	I0407 14:00:01.054931    9720 command_runner.go:130] > [preflight] Running pre-flight checks
	I0407 14:00:01.054931    9720 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 14:00:01.055186    9720 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 14:00:01.055245    9720 command_runner.go:130] > [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 14:00:01.055528    9720 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 14:00:01.055528    9720 command_runner.go:130] > [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 14:00:01.055638    9720 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0407 14:00:01.055638    9720 command_runner.go:130] > [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0407 14:00:01.055638    9720 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 14:00:01.055638    9720 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 14:00:01.060239    9720 out.go:235]   - Generating certificates and keys ...
	I0407 14:00:01.060239    9720 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0407 14:00:01.060239    9720 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 14:00:01.060239    9720 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 14:00:01.060239    9720 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0407 14:00:01.060787    9720 command_runner.go:130] > [certs] Generating "apiserver-kubelet-client" certificate and key
	I0407 14:00:01.060787    9720 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0407 14:00:01.061078    9720 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0407 14:00:01.061159    9720 command_runner.go:130] > [certs] Generating "front-proxy-ca" certificate and key
	I0407 14:00:01.061211    9720 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0407 14:00:01.061211    9720 command_runner.go:130] > [certs] Generating "front-proxy-client" certificate and key
	I0407 14:00:01.061211    9720 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0407 14:00:01.061211    9720 command_runner.go:130] > [certs] Generating "etcd/ca" certificate and key
	I0407 14:00:01.061211    9720 command_runner.go:130] > [certs] Generating "etcd/server" certificate and key
	I0407 14:00:01.061211    9720 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0407 14:00:01.061761    9720 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [localhost multinode-140200] and IPs [172.17.92.89 127.0.0.1 ::1]
	I0407 14:00:01.061922    9720 command_runner.go:130] > [certs] etcd/server serving cert is signed for DNS names [localhost multinode-140200] and IPs [172.17.92.89 127.0.0.1 ::1]
	I0407 14:00:01.062069    9720 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0407 14:00:01.062094    9720 command_runner.go:130] > [certs] Generating "etcd/peer" certificate and key
	I0407 14:00:01.062094    9720 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-140200] and IPs [172.17.92.89 127.0.0.1 ::1]
	I0407 14:00:01.062094    9720 command_runner.go:130] > [certs] etcd/peer serving cert is signed for DNS names [localhost multinode-140200] and IPs [172.17.92.89 127.0.0.1 ::1]
	I0407 14:00:01.062094    9720 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0407 14:00:01.062094    9720 command_runner.go:130] > [certs] Generating "etcd/healthcheck-client" certificate and key
	I0407 14:00:01.062722    9720 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0407 14:00:01.062722    9720 command_runner.go:130] > [certs] Generating "apiserver-etcd-client" certificate and key
	I0407 14:00:01.062896    9720 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0407 14:00:01.062896    9720 command_runner.go:130] > [certs] Generating "sa" key and public key
	I0407 14:00:01.062896    9720 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 14:00:01.062896    9720 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 14:00:01.062896    9720 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 14:00:01.062896    9720 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 14:00:01.063434    9720 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0407 14:00:01.063531    9720 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0407 14:00:01.063661    9720 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 14:00:01.063661    9720 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 14:00:01.063899    9720 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 14:00:01.063920    9720 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 14:00:01.063948    9720 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 14:00:01.063948    9720 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 14:00:01.063948    9720 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 14:00:01.063948    9720 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 14:00:01.063948    9720 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 14:00:01.063948    9720 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 14:00:01.070335    9720 out.go:235]   - Booting up control plane ...
	I0407 14:00:01.070500    9720 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 14:00:01.070570    9720 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 14:00:01.070766    9720 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 14:00:01.070839    9720 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 14:00:01.070894    9720 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 14:00:01.070894    9720 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 14:00:01.070894    9720 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 14:00:01.070894    9720 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 14:00:01.070894    9720 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 14:00:01.071369    9720 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 14:00:01.071369    9720 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0407 14:00:01.071369    9720 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 14:00:01.071369    9720 command_runner.go:130] > [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0407 14:00:01.071369    9720 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0407 14:00:01.071369    9720 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0407 14:00:01.071369    9720 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0407 14:00:01.071369    9720 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 1.002240856s
	I0407 14:00:01.071369    9720 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 1.002240856s
	I0407 14:00:01.071369    9720 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0407 14:00:01.071369    9720 command_runner.go:130] > [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0407 14:00:01.072420    9720 kubeadm.go:310] [api-check] The API server is healthy after 7.004277252s
	I0407 14:00:01.072420    9720 command_runner.go:130] > [api-check] The API server is healthy after 7.004277252s
	I0407 14:00:01.072420    9720 command_runner.go:130] > [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0407 14:00:01.072420    9720 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0407 14:00:01.072420    9720 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0407 14:00:01.072420    9720 command_runner.go:130] > [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0407 14:00:01.072420    9720 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0407 14:00:01.072420    9720 command_runner.go:130] > [upload-certs] Skipping phase. Please see --upload-certs
	I0407 14:00:01.073703    9720 kubeadm.go:310] [mark-control-plane] Marking the node multinode-140200 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0407 14:00:01.073743    9720 command_runner.go:130] > [mark-control-plane] Marking the node multinode-140200 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0407 14:00:01.073876    9720 command_runner.go:130] > [bootstrap-token] Using token: lxx0ib.fby705jmzz4v43jn
	I0407 14:00:01.073876    9720 kubeadm.go:310] [bootstrap-token] Using token: lxx0ib.fby705jmzz4v43jn
	I0407 14:00:01.077668    9720 out.go:235]   - Configuring RBAC rules ...
	I0407 14:00:01.077668    9720 command_runner.go:130] > [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0407 14:00:01.077668    9720 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0407 14:00:01.077668    9720 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0407 14:00:01.077668    9720 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0407 14:00:01.077668    9720 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0407 14:00:01.077668    9720 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0407 14:00:01.077668    9720 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0407 14:00:01.077668    9720 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0407 14:00:01.078879    9720 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0407 14:00:01.078879    9720 command_runner.go:130] > [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0407 14:00:01.079160    9720 command_runner.go:130] > [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0407 14:00:01.079160    9720 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0407 14:00:01.079425    9720 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0407 14:00:01.079425    9720 command_runner.go:130] > [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0407 14:00:01.079425    9720 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0407 14:00:01.079712    9720 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0407 14:00:01.079835    9720 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0407 14:00:01.079835    9720 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0407 14:00:01.079835    9720 kubeadm.go:310] 
	I0407 14:00:01.080079    9720 command_runner.go:130] > Your Kubernetes control-plane has initialized successfully!
	I0407 14:00:01.080079    9720 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0407 14:00:01.080079    9720 kubeadm.go:310] 
	I0407 14:00:01.080303    9720 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0407 14:00:01.080303    9720 command_runner.go:130] > To start using your cluster, you need to run the following as a regular user:
	I0407 14:00:01.080303    9720 kubeadm.go:310] 
	I0407 14:00:01.080303    9720 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0407 14:00:01.080507    9720 command_runner.go:130] >   mkdir -p $HOME/.kube
	I0407 14:00:01.080659    9720 command_runner.go:130] >   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0407 14:00:01.080659    9720 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0407 14:00:01.080828    9720 command_runner.go:130] >   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0407 14:00:01.080828    9720 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0407 14:00:01.080828    9720 kubeadm.go:310] 
	I0407 14:00:01.081029    9720 command_runner.go:130] > Alternatively, if you are the root user, you can run:
	I0407 14:00:01.081029    9720 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0407 14:00:01.081029    9720 kubeadm.go:310] 
	I0407 14:00:01.081189    9720 command_runner.go:130] >   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0407 14:00:01.081189    9720 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0407 14:00:01.081189    9720 kubeadm.go:310] 
	I0407 14:00:01.081329    9720 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0407 14:00:01.081329    9720 command_runner.go:130] > You should now deploy a pod network to the cluster.
	I0407 14:00:01.081506    9720 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0407 14:00:01.081506    9720 command_runner.go:130] > Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0407 14:00:01.081506    9720 command_runner.go:130] >   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0407 14:00:01.081506    9720 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0407 14:00:01.081506    9720 kubeadm.go:310] 
	I0407 14:00:01.081506    9720 command_runner.go:130] > You can now join any number of control-plane nodes by copying certificate authorities
	I0407 14:00:01.081506    9720 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0407 14:00:01.081506    9720 command_runner.go:130] > and service account keys on each node and then running the following as root:
	I0407 14:00:01.081506    9720 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0407 14:00:01.081506    9720 kubeadm.go:310] 
	I0407 14:00:01.082305    9720 command_runner.go:130] >   kubeadm join control-plane.minikube.internal:8443 --token lxx0ib.fby705jmzz4v43jn \
	I0407 14:00:01.082305    9720 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token lxx0ib.fby705jmzz4v43jn \
	I0407 14:00:01.082487    9720 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e47514a965ab97a4e144f4fd6f094d10c7d6accee7eb763a802bd4fd6fa9e479 \
	I0407 14:00:01.082487    9720 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:e47514a965ab97a4e144f4fd6f094d10c7d6accee7eb763a802bd4fd6fa9e479 \
	I0407 14:00:01.082487    9720 kubeadm.go:310] 	--control-plane 
	I0407 14:00:01.082487    9720 command_runner.go:130] > 	--control-plane 
	I0407 14:00:01.082487    9720 kubeadm.go:310] 
	I0407 14:00:01.082794    9720 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0407 14:00:01.082794    9720 command_runner.go:130] > Then you can join any number of worker nodes by running the following on each as root:
	I0407 14:00:01.082794    9720 kubeadm.go:310] 
	I0407 14:00:01.083114    9720 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token lxx0ib.fby705jmzz4v43jn \
	I0407 14:00:01.083114    9720 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token lxx0ib.fby705jmzz4v43jn \
	I0407 14:00:01.083246    9720 command_runner.go:130] > 	--discovery-token-ca-cert-hash sha256:e47514a965ab97a4e144f4fd6f094d10c7d6accee7eb763a802bd4fd6fa9e479 
	I0407 14:00:01.083246    9720 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e47514a965ab97a4e144f4fd6f094d10c7d6accee7eb763a802bd4fd6fa9e479 
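	[editor note] The kubeadm join commands recorded above pin the cluster CA with --discovery-token-ca-cert-hash. For reference only (this is not part of the minikube code path being logged), kubeadm defines that value as the SHA-256 of the DER-encoded SubjectPublicKeyInfo of the cluster CA certificate. A minimal Go sketch that recomputes it; the CA path is an assumption (kubeadm's default is /etc/kubernetes/pki/ca.crt, while minikube may keep its copy under /var/lib/minikube/certs):

```go
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/hex"
	"encoding/pem"
	"fmt"
	"os"
)

func main() {
	// Path is an assumption; adjust to wherever the cluster CA cert lives in the VM.
	pemBytes, err := os.ReadFile("/etc/kubernetes/pki/ca.crt")
	if err != nil {
		panic(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		panic("no PEM block found in CA file")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		panic(err)
	}
	// kubeadm's --discovery-token-ca-cert-hash is sha256 over the DER-encoded
	// SubjectPublicKeyInfo of the CA certificate's public key.
	spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
	if err != nil {
		panic(err)
	}
	sum := sha256.Sum256(spki)
	fmt.Println("sha256:" + hex.EncodeToString(sum[:]))
}
```

	Run against the CA of the cluster above, this should reproduce the sha256:e47514a9… value shown in the join commands.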
	I0407 14:00:01.083246    9720 cni.go:84] Creating CNI manager for ""
	I0407 14:00:01.083246    9720 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0407 14:00:01.086146    9720 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0407 14:00:01.098159    9720 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0407 14:00:01.106148    9720 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0407 14:00:01.106148    9720 command_runner.go:130] >   Size: 3103192   	Blocks: 6064       IO Block: 4096   regular file
	I0407 14:00:01.106148    9720 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0407 14:00:01.106148    9720 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0407 14:00:01.106148    9720 command_runner.go:130] > Access: 2025-04-07 13:58:06.866223000 +0000
	I0407 14:00:01.106148    9720 command_runner.go:130] > Modify: 2025-01-14 09:03:58.000000000 +0000
	I0407 14:00:01.106148    9720 command_runner.go:130] > Change: 2025-04-07 13:57:59.004000000 +0000
	I0407 14:00:01.106148    9720 command_runner.go:130] >  Birth: -
	I0407 14:00:01.106148    9720 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0407 14:00:01.106148    9720 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0407 14:00:01.150997    9720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0407 14:00:01.893773    9720 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet created
	I0407 14:00:01.893773    9720 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet created
	I0407 14:00:01.893773    9720 command_runner.go:130] > serviceaccount/kindnet created
	I0407 14:00:01.893773    9720 command_runner.go:130] > daemonset.apps/kindnet created
	I0407 14:00:01.894111    9720 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0407 14:00:01.906419    9720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 14:00:01.906419    9720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-140200 minikube.k8s.io/updated_at=2025_04_07T14_00_01_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277 minikube.k8s.io/name=multinode-140200 minikube.k8s.io/primary=true
	I0407 14:00:01.934463    9720 command_runner.go:130] > -16
	I0407 14:00:01.934546    9720 ops.go:34] apiserver oom_adj: -16
	I0407 14:00:02.126335    9720 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/minikube-rbac created
	I0407 14:00:02.128056    9720 command_runner.go:130] > node/multinode-140200 labeled
	I0407 14:00:02.137601    9720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 14:00:02.251830    9720 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0407 14:00:02.645361    9720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 14:00:02.768946    9720 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0407 14:00:03.139758    9720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 14:00:03.257869    9720 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0407 14:00:03.639095    9720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 14:00:03.749101    9720 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0407 14:00:04.141604    9720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 14:00:04.259453    9720 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0407 14:00:04.641018    9720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 14:00:04.752685    9720 command_runner.go:130] ! Error from server (NotFound): serviceaccounts "default" not found
	I0407 14:00:05.138928    9720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 14:00:05.248246    9720 command_runner.go:130] > NAME      SECRETS   AGE
	I0407 14:00:05.248246    9720 command_runner.go:130] > default   0         0s
	I0407 14:00:05.248246    9720 kubeadm.go:1113] duration metric: took 3.3541121s to wait for elevateKubeSystemPrivileges
	I0407 14:00:05.248246    9720 kubeadm.go:394] duration metric: took 18.1399821s to StartCluster
	I0407 14:00:05.248246    9720 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:00:05.248246    9720 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 14:00:05.252898    9720 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:00:05.254820    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0407 14:00:05.254820    9720 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0407 14:00:05.255773    9720 addons.go:69] Setting storage-provisioner=true in profile "multinode-140200"
	I0407 14:00:05.255856    9720 addons.go:69] Setting default-storageclass=true in profile "multinode-140200"
	I0407 14:00:05.255912    9720 addons.go:238] Setting addon storage-provisioner=true in "multinode-140200"
	I0407 14:00:05.256292    9720 host.go:66] Checking if "multinode-140200" exists ...
	I0407 14:00:05.255444    9720 config.go:182] Loaded profile config "multinode-140200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 14:00:05.254820    9720 start.go:235] Will wait 6m0s for node &{Name: IP:172.17.92.89 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 14:00:05.256114    9720 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "multinode-140200"
	I0407 14:00:05.257931    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:00:05.258056    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:00:05.259969    9720 out.go:177] * Verifying Kubernetes components...
	I0407 14:00:05.276669    9720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:00:05.460760    9720 command_runner.go:130] > apiVersion: v1
	I0407 14:00:05.460833    9720 command_runner.go:130] > data:
	I0407 14:00:05.460833    9720 command_runner.go:130] >   Corefile: |
	I0407 14:00:05.460833    9720 command_runner.go:130] >     .:53 {
	I0407 14:00:05.460833    9720 command_runner.go:130] >         errors
	I0407 14:00:05.460888    9720 command_runner.go:130] >         health {
	I0407 14:00:05.460888    9720 command_runner.go:130] >            lameduck 5s
	I0407 14:00:05.460888    9720 command_runner.go:130] >         }
	I0407 14:00:05.460888    9720 command_runner.go:130] >         ready
	I0407 14:00:05.460888    9720 command_runner.go:130] >         kubernetes cluster.local in-addr.arpa ip6.arpa {
	I0407 14:00:05.460888    9720 command_runner.go:130] >            pods insecure
	I0407 14:00:05.460888    9720 command_runner.go:130] >            fallthrough in-addr.arpa ip6.arpa
	I0407 14:00:05.460888    9720 command_runner.go:130] >            ttl 30
	I0407 14:00:05.460888    9720 command_runner.go:130] >         }
	I0407 14:00:05.460888    9720 command_runner.go:130] >         prometheus :9153
	I0407 14:00:05.460888    9720 command_runner.go:130] >         forward . /etc/resolv.conf {
	I0407 14:00:05.460888    9720 command_runner.go:130] >            max_concurrent 1000
	I0407 14:00:05.460888    9720 command_runner.go:130] >         }
	I0407 14:00:05.460888    9720 command_runner.go:130] >         cache 30 {
	I0407 14:00:05.460888    9720 command_runner.go:130] >            disable success cluster.local
	I0407 14:00:05.460888    9720 command_runner.go:130] >            disable denial cluster.local
	I0407 14:00:05.460888    9720 command_runner.go:130] >         }
	I0407 14:00:05.460888    9720 command_runner.go:130] >         loop
	I0407 14:00:05.460888    9720 command_runner.go:130] >         reload
	I0407 14:00:05.460888    9720 command_runner.go:130] >         loadbalance
	I0407 14:00:05.460888    9720 command_runner.go:130] >     }
	I0407 14:00:05.460888    9720 command_runner.go:130] > kind: ConfigMap
	I0407 14:00:05.460888    9720 command_runner.go:130] > metadata:
	I0407 14:00:05.460888    9720 command_runner.go:130] >   creationTimestamp: "2025-04-07T14:00:00Z"
	I0407 14:00:05.460888    9720 command_runner.go:130] >   name: coredns
	I0407 14:00:05.460888    9720 command_runner.go:130] >   namespace: kube-system
	I0407 14:00:05.460888    9720 command_runner.go:130] >   resourceVersion: "252"
	I0407 14:00:05.460888    9720 command_runner.go:130] >   uid: 9ff68dbd-75a3-4b40-9964-de995f6dfcf2
	I0407 14:00:05.460888    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0407 14:00:05.616851    9720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 14:00:06.100149    9720 command_runner.go:130] > configmap/coredns replaced
	I0407 14:00:06.100279    9720 start.go:971] {"host.minikube.internal": 172.17.80.1} host record injected into CoreDNS's ConfigMap
	I0407 14:00:06.101643    9720 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 14:00:06.101757    9720 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 14:00:06.102535    9720 kapi.go:59] client config for multinode-140200: &rest.Config{Host:"https://172.17.92.89:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-140200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-140200\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2be92e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0407 14:00:06.102535    9720 kapi.go:59] client config for multinode-140200: &rest.Config{Host:"https://172.17.92.89:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-140200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-140200\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2be92e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0407 14:00:06.104125    9720 cert_rotation.go:140] Starting client certificate rotation controller
	I0407 14:00:06.104575    9720 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0407 14:00:06.104697    9720 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0407 14:00:06.104697    9720 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0407 14:00:06.104697    9720 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0407 14:00:06.105153    9720 node_ready.go:35] waiting up to 6m0s for node "multinode-140200" to be "Ready" ...
	I0407 14:00:06.105519    9720 deployment.go:95] "Request Body" body=""
	I0407 14:00:06.105590    9720 type.go:168] "Request Body" body=""
	I0407 14:00:06.105669    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:06.105781    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:06.105669    9720 round_trippers.go:470] GET https://172.17.92.89:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0407 14:00:06.105781    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:06.105881    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:06.105881    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:06.105971    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:06.105881    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:06.144230    9720 round_trippers.go:581] Response Status: 200 OK in 38 milliseconds
	I0407 14:00:06.144381    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:06.144381    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:06.144381    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:06.144381    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:06.144381    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:06.144381    9720 round_trippers.go:587]     Content-Length: 144
	I0407 14:00:06.144381    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:06 GMT
	I0407 14:00:06.144381    9720 round_trippers.go:587]     Audit-Id: 3db151ca-6578-42df-876b-fdcc241c3f1d
	I0407 14:00:06.144522    9720 round_trippers.go:581] Response Status: 200 OK in 38 milliseconds
	I0407 14:00:06.144522    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:06.144522    9720 round_trippers.go:587]     Audit-Id: ce06d3f1-7fab-4654-a205-a303fbb52aed
	I0407 14:00:06.144522    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:06.144522    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:06.144522    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:06.144522    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:06.144522    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:06 GMT
	I0407 14:00:06.150583    9720 deployment.go:95] "Response Body" body=<
		00000000  6b 38 73 00 0a 17 0a 0e  61 75 74 6f 73 63 61 6c  |k8s.....autoscal|
		00000010  69 6e 67 2f 76 31 12 05  53 63 61 6c 65 12 6d 0a  |ing/v1..Scale.m.|
		00000020  51 0a 07 63 6f 72 65 64  6e 73 12 00 1a 0b 6b 75  |Q..coredns....ku|
		00000030  62 65 2d 73 79 73 74 65  6d 22 00 2a 24 65 30 31  |be-system".*$e01|
		00000040  37 39 30 34 63 2d 32 61  63 36 2d 34 62 39 30 2d  |7904c-2ac6-4b90-|
		00000050  38 64 61 34 2d 62 66 30  37 62 31 65 65 39 34 33  |8da4-bf07b1ee943|
		00000060  64 32 03 33 37 36 38 00  42 08 08 e0 b4 cf bf 06  |d2.3768.B.......|
		00000070  10 00 12 02 08 02 1a 14  08 00 12 10 6b 38 73 2d  |............k8s-|
		00000080  61 70 70 3d 6b 75 62 65  2d 64 6e 73 1a 00 22 00  |app=kube-dns..".|
	 >
	I0407 14:00:06.151065    9720 deployment.go:111] "Request Body" body=<
		00000000  6b 38 73 00 0a 17 0a 0e  61 75 74 6f 73 63 61 6c  |k8s.....autoscal|
		00000010  69 6e 67 2f 76 31 12 05  53 63 61 6c 65 12 6d 0a  |ing/v1..Scale.m.|
		00000020  51 0a 07 63 6f 72 65 64  6e 73 12 00 1a 0b 6b 75  |Q..coredns....ku|
		00000030  62 65 2d 73 79 73 74 65  6d 22 00 2a 24 65 30 31  |be-system".*$e01|
		00000040  37 39 30 34 63 2d 32 61  63 36 2d 34 62 39 30 2d  |7904c-2ac6-4b90-|
		00000050  38 64 61 34 2d 62 66 30  37 62 31 65 65 39 34 33  |8da4-bf07b1ee943|
		00000060  64 32 03 33 37 36 38 00  42 08 08 e0 b4 cf bf 06  |d2.3768.B.......|
		00000070  10 00 12 02 08 01 1a 14  08 00 12 10 6b 38 73 2d  |............k8s-|
		00000080  61 70 70 3d 6b 75 62 65  2d 64 6e 73 1a 00 22 00  |app=kube-dns..".|
	 >
	I0407 14:00:06.151138    9720 round_trippers.go:470] PUT https://172.17.92.89:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0407 14:00:06.151194    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:06.151194    9720 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:06.151253    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:06.151253    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:06.151321    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:06.178691    9720 round_trippers.go:581] Response Status: 200 OK in 27 milliseconds
	I0407 14:00:06.178753    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:06.178820    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:06.178820    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:06.178912    9720 round_trippers.go:587]     Content-Length: 144
	I0407 14:00:06.178912    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:06 GMT
	I0407 14:00:06.178912    9720 round_trippers.go:587]     Audit-Id: db3e11dd-1181-4447-9dd7-664316e853b0
	I0407 14:00:06.178954    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:06.178954    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:06.179004    9720 deployment.go:111] "Response Body" body=<
		00000000  6b 38 73 00 0a 17 0a 0e  61 75 74 6f 73 63 61 6c  |k8s.....autoscal|
		00000010  69 6e 67 2f 76 31 12 05  53 63 61 6c 65 12 6d 0a  |ing/v1..Scale.m.|
		00000020  51 0a 07 63 6f 72 65 64  6e 73 12 00 1a 0b 6b 75  |Q..coredns....ku|
		00000030  62 65 2d 73 79 73 74 65  6d 22 00 2a 24 65 30 31  |be-system".*$e01|
		00000040  37 39 30 34 63 2d 32 61  63 36 2d 34 62 39 30 2d  |7904c-2ac6-4b90-|
		00000050  38 64 61 34 2d 62 66 30  37 62 31 65 65 39 34 33  |8da4-bf07b1ee943|
		00000060  64 32 03 33 38 32 38 00  42 08 08 e0 b4 cf bf 06  |d2.3828.B.......|
		00000070  10 00 12 02 08 01 1a 14  08 00 12 10 6b 38 73 2d  |............k8s-|
		00000080  61 70 70 3d 6b 75 62 65  2d 64 6e 73 1a 00 22 00  |app=kube-dns..".|
	 >
	I0407 14:00:06.605412    9720 type.go:168] "Request Body" body=""
	I0407 14:00:06.605412    9720 deployment.go:95] "Request Body" body=""
	I0407 14:00:06.605412    9720 round_trippers.go:470] GET https://172.17.92.89:8443/apis/apps/v1/namespaces/kube-system/deployments/coredns/scale
	I0407 14:00:06.605412    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:06.605412    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:06.605412    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:06.605412    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:06.606050    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:06.606050    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:06.606050    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:06.614414    9720 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0407 14:00:06.614414    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:06.614414    9720 round_trippers.go:587]     Audit-Id: c29289ed-07fb-4da9-8735-b7a37cfce8df
	I0407 14:00:06.614414    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:06.614414    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:06.614414    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:06.614414    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:06.614414    9720 round_trippers.go:587]     Content-Length: 144
	I0407 14:00:06.614414    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:06 GMT
	I0407 14:00:06.614414    9720 deployment.go:95] "Response Body" body=<
		00000000  6b 38 73 00 0a 17 0a 0e  61 75 74 6f 73 63 61 6c  |k8s.....autoscal|
		00000010  69 6e 67 2f 76 31 12 05  53 63 61 6c 65 12 6d 0a  |ing/v1..Scale.m.|
		00000020  51 0a 07 63 6f 72 65 64  6e 73 12 00 1a 0b 6b 75  |Q..coredns....ku|
		00000030  62 65 2d 73 79 73 74 65  6d 22 00 2a 24 65 30 31  |be-system".*$e01|
		00000040  37 39 30 34 63 2d 32 61  63 36 2d 34 62 39 30 2d  |7904c-2ac6-4b90-|
		00000050  38 64 61 34 2d 62 66 30  37 62 31 65 65 39 34 33  |8da4-bf07b1ee943|
		00000060  64 32 03 33 39 34 38 00  42 08 08 e0 b4 cf bf 06  |d2.3948.B.......|
		00000070  10 00 12 02 08 01 1a 14  08 01 12 10 6b 38 73 2d  |............k8s-|
		00000080  61 70 70 3d 6b 75 62 65  2d 64 6e 73 1a 00 22 00  |app=kube-dns..".|
	 >
	I0407 14:00:06.614414    9720 kapi.go:214] "coredns" deployment in "kube-system" namespace and "multinode-140200" context rescaled to 1 replicas
	I0407 14:00:06.616879    9720 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0407 14:00:06.616879    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:06.616879    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:06.616879    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:06.616879    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:06.616879    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:06.616879    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:06 GMT
	I0407 14:00:06.616879    9720 round_trippers.go:587]     Audit-Id: 8812fd43-2c92-4c25-b0db-3fb9497b6e0d
	I0407 14:00:06.617546    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
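	[editor note] The GET/PUT pair against .../deployments/coredns/scale traced above is what produces the "rescaled to 1 replicas" message: minikube reads the coredns Scale subresource and writes it back with one replica. A hedged client-go sketch of the same scale-subresource round trip (an illustration, not minikube's actual implementation, which sends the protobuf requests shown in the trace); the kubeconfig path is reused from the log:

```go
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Kubeconfig path taken from the log above; adjust as needed.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	// GET the coredns Scale subresource, then PUT it back with replicas=1,
	// mirroring the request pair visible in the round_trippers trace.
	scale, err := cs.AppsV1().Deployments("kube-system").GetScale(ctx, "coredns", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	scale.Spec.Replicas = 1
	if _, err := cs.AppsV1().Deployments("kube-system").UpdateScale(ctx, "coredns", scale, metav1.UpdateOptions{}); err != nil {
		panic(err)
	}
	fmt.Println("coredns rescaled to 1 replica")
}
```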
	I0407 14:00:07.106042    9720 type.go:168] "Request Body" body=""
	I0407 14:00:07.106042    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:07.106042    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:07.106042    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:07.106042    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:07.110932    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:00:07.111040    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:07.111040    9720 round_trippers.go:587]     Audit-Id: fcf6cedd-5dc3-4d33-a709-0eb7e52e973d
	I0407 14:00:07.111040    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:07.111040    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:07.111040    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:07.111040    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:07.111040    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:07 GMT
	I0407 14:00:07.111343    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:07.605955    9720 type.go:168] "Request Body" body=""
	I0407 14:00:07.605955    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:07.605955    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:07.605955    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:07.605955    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:07.609360    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:00:07.610198    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:07.610198    9720 round_trippers.go:587]     Audit-Id: 75da5282-7e51-4797-89f7-9db9b6cde2ef
	I0407 14:00:07.610198    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:07.610198    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:07.610198    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:07.610198    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:07.610198    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:07 GMT
	I0407 14:00:07.610604    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:07.615103    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:00:07.615159    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:00:07.617893    9720 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 14:00:07.621076    9720 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 14:00:07.621147    9720 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0407 14:00:07.621235    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:00:07.632841    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:00:07.632841    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:00:07.634452    9720 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 14:00:07.635296    9720 kapi.go:59] client config for multinode-140200: &rest.Config{Host:"https://172.17.92.89:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-140200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-140200\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2be92e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}

	I0407 14:00:07.636539    9720 addons.go:238] Setting addon default-storageclass=true in "multinode-140200"
	I0407 14:00:07.636739    9720 host.go:66] Checking if "multinode-140200" exists ...
	I0407 14:00:07.638478    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:00:08.105688    9720 type.go:168] "Request Body" body=""
	I0407 14:00:08.105688    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:08.105688    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:08.105688    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:08.105688    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:08.110313    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:00:08.110443    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:08.110443    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:08.110443    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:08 GMT
	I0407 14:00:08.110443    9720 round_trippers.go:587]     Audit-Id: ada248da-8867-4405-a50a-99dcdea2e882
	I0407 14:00:08.110443    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:08.110443    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:08.110443    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:08.110895    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:08.111190    9720 node_ready.go:53] node "multinode-140200" has status "Ready":"False"
	I0407 14:00:08.605735    9720 type.go:168] "Request Body" body=""
	I0407 14:00:08.605735    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:08.605735    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:08.605735    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:08.605735    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:08.611707    9720 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:00:08.611822    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:08.611822    9720 round_trippers.go:587]     Audit-Id: e31da2f2-f8d6-4d7a-a3ed-f0fb64632f30
	I0407 14:00:08.611822    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:08.611896    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:08.611896    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:08.611896    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:08.611896    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:08 GMT
	I0407 14:00:08.618043    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:09.106297    9720 type.go:168] "Request Body" body=""
	I0407 14:00:09.106297    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:09.106297    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:09.106297    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:09.106297    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:09.112107    9720 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:00:09.112172    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:09.112172    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:09 GMT
	I0407 14:00:09.112230    9720 round_trippers.go:587]     Audit-Id: ba2f00f6-467e-4e05-8c1a-f6f0279b0273
	I0407 14:00:09.112230    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:09.112230    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:09.112230    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:09.112296    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:09.112657    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:09.605917    9720 type.go:168] "Request Body" body=""
	I0407 14:00:09.605917    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:09.605917    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:09.605917    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:09.605917    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:09.610248    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:00:09.610248    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:09.610248    9720 round_trippers.go:587]     Audit-Id: 1adfd2f4-f6e4-4d4b-9ac5-f6d05cca426b
	I0407 14:00:09.610248    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:09.610248    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:09.610248    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:09.610248    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:09.610248    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:09 GMT
	I0407 14:00:09.610248    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:10.008523    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:00:10.008576    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:00:10.008576    9720 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0407 14:00:10.008576    9720 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0407 14:00:10.008576    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:00:10.059116    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:00:10.059647    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:00:10.059720    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:00:10.106835    9720 type.go:168] "Request Body" body=""
	I0407 14:00:10.106835    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:10.106835    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:10.106835    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:10.106835    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:10.111759    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:00:10.111911    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:10.111911    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:10.111982    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:10 GMT
	I0407 14:00:10.111982    9720 round_trippers.go:587]     Audit-Id: 8bc8d917-9fce-4a27-8acd-f8fae79f8421
	I0407 14:00:10.111982    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:10.112043    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:10.112080    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:10.112159    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:10.112159    9720 node_ready.go:53] node "multinode-140200" has status "Ready":"False"
	I0407 14:00:10.605746    9720 type.go:168] "Request Body" body=""
	I0407 14:00:10.605746    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:10.605746    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:10.605746    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:10.605746    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:10.610373    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:00:10.610373    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:10.610373    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:10.610373    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:10.610373    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:10 GMT
	I0407 14:00:10.610373    9720 round_trippers.go:587]     Audit-Id: 324452e9-fae2-4cdd-9c2e-d160eab3fd5a
	I0407 14:00:10.610373    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:10.610373    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:10.611368    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:11.106407    9720 type.go:168] "Request Body" body=""
	I0407 14:00:11.106407    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:11.106407    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:11.106407    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:11.106407    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:11.185187    9720 round_trippers.go:581] Response Status: 200 OK in 78 milliseconds
	I0407 14:00:11.185187    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:11.185187    9720 round_trippers.go:587]     Audit-Id: 2d499885-58c1-4c7a-ae20-8f3e94666173
	I0407 14:00:11.185328    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:11.185328    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:11.185328    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:11.185328    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:11.185328    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:11 GMT
	I0407 14:00:11.185613    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:11.606017    9720 type.go:168] "Request Body" body=""
	I0407 14:00:11.606017    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:11.606017    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:11.606017    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:11.606017    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:11.612133    9720 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 14:00:11.612133    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:11.612244    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:11.612244    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:11.612244    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:11 GMT
	I0407 14:00:11.612244    9720 round_trippers.go:587]     Audit-Id: 128d84e7-5868-4e35-8d19-0348c3957dce
	I0407 14:00:11.612244    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:11.612244    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:11.612631    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:12.106030    9720 type.go:168] "Request Body" body=""
	I0407 14:00:12.106030    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:12.106030    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:12.106030    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:12.106030    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:12.107702    9720 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0407 14:00:12.107702    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:12.107702    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:12.107702    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:12.107702    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:12.107702    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:12 GMT
	I0407 14:00:12.107702    9720 round_trippers.go:587]     Audit-Id: 5218558d-0cd1-4cd6-a1fc-04a02373f89a
	I0407 14:00:12.107702    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:12.107702    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:12.357771    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:00:12.357771    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:00:12.358607    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:00:12.608340    9720 type.go:168] "Request Body" body=""
	I0407 14:00:12.608340    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:12.608340    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:12.608340    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:12.608340    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:12.612589    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:00:12.612669    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:12.612669    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:12.612669    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:12.612669    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:12 GMT
	I0407 14:00:12.612669    9720 round_trippers.go:587]     Audit-Id: f72b5dde-d26f-4c6f-a607-4d4c78d329b1
	I0407 14:00:12.612669    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:12.612669    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:12.613002    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:12.613204    9720 node_ready.go:53] node "multinode-140200" has status "Ready":"False"
	I0407 14:00:12.800935    9720 main.go:141] libmachine: [stdout =====>] : 172.17.92.89
	
	I0407 14:00:12.801999    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:00:12.802061    9720 sshutil.go:53] new ssh client: &{IP:172.17.92.89 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200\id_rsa Username:docker}
	I0407 14:00:12.943365    9720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 14:00:13.106439    9720 type.go:168] "Request Body" body=""
	I0407 14:00:13.106439    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:13.106439    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:13.106439    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:13.106439    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:13.110870    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:00:13.110870    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:13.110870    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:13.110984    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:13.110984    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:13.110984    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:13 GMT
	I0407 14:00:13.110984    9720 round_trippers.go:587]     Audit-Id: 908549e8-eda4-441a-a20c-1cab4de3faf0
	I0407 14:00:13.110984    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:13.111282    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:13.605885    9720 type.go:168] "Request Body" body=""
	I0407 14:00:13.605885    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:13.605885    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:13.605885    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:13.605885    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:13.634485    9720 round_trippers.go:581] Response Status: 200 OK in 28 milliseconds
	I0407 14:00:13.634485    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:13.634485    9720 round_trippers.go:587]     Audit-Id: 9b4cc67f-0686-4221-9e21-66ff3e02602c
	I0407 14:00:13.634485    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:13.634485    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:13.635415    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:13.635415    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:13.635415    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:13 GMT
	I0407 14:00:13.636750    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:13.640778    9720 command_runner.go:130] > serviceaccount/storage-provisioner created
	I0407 14:00:13.640778    9720 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/storage-provisioner created
	I0407 14:00:13.640778    9720 command_runner.go:130] > role.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0407 14:00:13.640778    9720 command_runner.go:130] > rolebinding.rbac.authorization.k8s.io/system:persistent-volume-provisioner created
	I0407 14:00:13.640778    9720 command_runner.go:130] > endpoints/k8s.io-minikube-hostpath created
	I0407 14:00:13.640778    9720 command_runner.go:130] > pod/storage-provisioner created
	I0407 14:00:14.106303    9720 type.go:168] "Request Body" body=""
	I0407 14:00:14.106303    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:14.106303    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:14.106303    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:14.106303    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:14.306442    9720 round_trippers.go:581] Response Status: 200 OK in 200 milliseconds
	I0407 14:00:14.306442    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:14.306442    9720 round_trippers.go:587]     Audit-Id: adc1a46d-342d-41ba-bdb3-e7dba4a6056a
	I0407 14:00:14.306442    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:14.306442    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:14.306442    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:14.306442    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:14.306442    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:14 GMT
	I0407 14:00:14.306861    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:14.606768    9720 type.go:168] "Request Body" body=""
	I0407 14:00:14.607011    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:14.607011    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:14.607011    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:14.607011    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:14.610935    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:00:14.610935    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:14.610935    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:14.610935    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:14.610935    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:14.610935    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:14 GMT
	I0407 14:00:14.611049    9720 round_trippers.go:587]     Audit-Id: 44d41f6d-f0a5-48c5-9082-c9d69496156e
	I0407 14:00:14.611049    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:14.611599    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:15.007791    9720 main.go:141] libmachine: [stdout =====>] : 172.17.92.89
	
	I0407 14:00:15.007969    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:00:15.008100    9720 sshutil.go:53] new ssh client: &{IP:172.17.92.89 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200\id_rsa Username:docker}
	I0407 14:00:15.106428    9720 type.go:168] "Request Body" body=""
	I0407 14:00:15.106428    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:15.106428    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:15.106428    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:15.106428    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:15.110805    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:00:15.110882    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:15.110882    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:15.110882    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:15 GMT
	I0407 14:00:15.110939    9720 round_trippers.go:587]     Audit-Id: 6da3a4fc-7d25-49cd-a245-5b6dfc7bada4
	I0407 14:00:15.110939    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:15.110939    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:15.110939    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:15.110939    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:15.110939    9720 node_ready.go:53] node "multinode-140200" has status "Ready":"False"
	I0407 14:00:15.158846    9720 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0407 14:00:15.310598    9720 command_runner.go:130] > storageclass.storage.k8s.io/standard created
	I0407 14:00:15.310745    9720 type.go:204] "Request Body" body=""
	I0407 14:00:15.310745    9720 round_trippers.go:470] GET https://172.17.92.89:8443/apis/storage.k8s.io/v1/storageclasses
	I0407 14:00:15.310745    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:15.310745    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:15.310745    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:15.317336    9720 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 14:00:15.317336    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:15.317336    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:15.317336    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:15.317336    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:15.317419    9720 round_trippers.go:587]     Content-Length: 957
	I0407 14:00:15.317419    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:15 GMT
	I0407 14:00:15.317419    9720 round_trippers.go:587]     Audit-Id: c94d0388-b7fd-4903-aa3c-0976fd0aee03
	I0407 14:00:15.317419    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:15.317507    9720 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 25 0a 11  73 74 6f 72 61 67 65 2e  |k8s..%..storage.|
		00000010  6b 38 73 2e 69 6f 2f 76  31 12 10 53 74 6f 72 61  |k8s.io/v1..Stora|
		00000020  67 65 43 6c 61 73 73 4c  69 73 74 12 8b 07 0a 09  |geClassList.....|
		00000030  0a 00 12 03 34 32 30 1a  00 12 fd 06 0a cd 06 0a  |....420.........|
		00000040  08 73 74 61 6e 64 61 72  64 12 00 1a 00 22 00 2a  |.standard....".*|
		00000050  24 37 32 64 31 31 38 62  61 2d 31 37 38 64 2d 34  |$72d118ba-178d-4|
		00000060  64 35 64 2d 38 33 64 37  2d 62 66 32 63 61 62 38  |d5d-83d7-bf2cab8|
		00000070  66 31 31 33 35 32 03 34  32 30 38 00 42 08 08 ef  |f11352.4208.B...|
		00000080  b4 cf bf 06 10 00 5a 2f  0a 1f 61 64 64 6f 6e 6d  |......Z/..addonm|
		00000090  61 6e 61 67 65 72 2e 6b  75 62 65 72 6e 65 74 65  |anager.kubernete|
		000000a0  73 2e 69 6f 2f 6d 6f 64  65 12 0c 45 6e 73 75 72  |s.io/mode..Ensur|
		000000b0  65 45 78 69 73 74 73 62  b7 02 0a 30 6b 75 62 65  |eExistsb...0kube|
		000000c0  63 74 6c 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |ctl.kubernetes. [truncated 3713 chars]
	 >
	I0407 14:00:15.317507    9720 type.go:267] "Request Body" body=<
		00000000  6b 38 73 00 0a 21 0a 11  73 74 6f 72 61 67 65 2e  |k8s..!..storage.|
		00000010  6b 38 73 2e 69 6f 2f 76  31 12 0c 53 74 6f 72 61  |k8s.io/v1..Stora|
		00000020  67 65 43 6c 61 73 73 12  fd 06 0a cd 06 0a 08 73  |geClass........s|
		00000030  74 61 6e 64 61 72 64 12  00 1a 00 22 00 2a 24 37  |tandard....".*$7|
		00000040  32 64 31 31 38 62 61 2d  31 37 38 64 2d 34 64 35  |2d118ba-178d-4d5|
		00000050  64 2d 38 33 64 37 2d 62  66 32 63 61 62 38 66 31  |d-83d7-bf2cab8f1|
		00000060  31 33 35 32 03 34 32 30  38 00 42 08 08 ef b4 cf  |1352.4208.B.....|
		00000070  bf 06 10 00 5a 2f 0a 1f  61 64 64 6f 6e 6d 61 6e  |....Z/..addonman|
		00000080  61 67 65 72 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |ager.kubernetes.|
		00000090  69 6f 2f 6d 6f 64 65 12  0c 45 6e 73 75 72 65 45  |io/mode..EnsureE|
		000000a0  78 69 73 74 73 62 b7 02  0a 30 6b 75 62 65 63 74  |xistsb...0kubect|
		000000b0  6c 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |l.kubernetes.io/|
		000000c0  6c 61 73 74 2d 61 70 70  6c 69 65 64 2d 63 6f 6e  |last-applied-co [truncated 3632 chars]
	 >
	I0407 14:00:15.317507    9720 round_trippers.go:470] PUT https://172.17.92.89:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0407 14:00:15.317507    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:15.317507    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:15.317507    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:15.317507    9720 round_trippers.go:480]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:15.322802    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:00:15.322802    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:15.322802    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:15.322802    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:15.322802    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:15.322802    9720 round_trippers.go:587]     Content-Length: 939
	I0407 14:00:15.322802    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:15 GMT
	I0407 14:00:15.322891    9720 round_trippers.go:587]     Audit-Id: eefb4057-3596-4076-a5b2-39476ab83d98
	I0407 14:00:15.322891    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:15.322958    9720 type.go:267] "Response Body" body=<
		00000000  6b 38 73 00 0a 21 0a 11  73 74 6f 72 61 67 65 2e  |k8s..!..storage.|
		00000010  6b 38 73 2e 69 6f 2f 76  31 12 0c 53 74 6f 72 61  |k8s.io/v1..Stora|
		00000020  67 65 43 6c 61 73 73 12  fd 06 0a cd 06 0a 08 73  |geClass........s|
		00000030  74 61 6e 64 61 72 64 12  00 1a 00 22 00 2a 24 37  |tandard....".*$7|
		00000040  32 64 31 31 38 62 61 2d  31 37 38 64 2d 34 64 35  |2d118ba-178d-4d5|
		00000050  64 2d 38 33 64 37 2d 62  66 32 63 61 62 38 66 31  |d-83d7-bf2cab8f1|
		00000060  31 33 35 32 03 34 32 30  38 00 42 08 08 ef b4 cf  |1352.4208.B.....|
		00000070  bf 06 10 00 5a 2f 0a 1f  61 64 64 6f 6e 6d 61 6e  |....Z/..addonman|
		00000080  61 67 65 72 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |ager.kubernetes.|
		00000090  69 6f 2f 6d 6f 64 65 12  0c 45 6e 73 75 72 65 45  |io/mode..EnsureE|
		000000a0  78 69 73 74 73 62 b7 02  0a 30 6b 75 62 65 63 74  |xistsb...0kubect|
		000000b0  6c 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |l.kubernetes.io/|
		000000c0  6c 61 73 74 2d 61 70 70  6c 69 65 64 2d 63 6f 6e  |last-applied-co [truncated 3632 chars]
	 >
	I0407 14:00:15.326450    9720 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0407 14:00:15.328787    9720 addons.go:514] duration metric: took 10.0738961s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0407 14:00:15.605543    9720 type.go:168] "Request Body" body=""
	I0407 14:00:15.605543    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:15.605543    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:15.605543    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:15.605543    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:15.610478    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:00:15.610478    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:15.610478    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:15.610478    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:15.610478    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:15 GMT
	I0407 14:00:15.610478    9720 round_trippers.go:587]     Audit-Id: 038eee2d-964c-42b6-87c6-85c44b45aeac
	I0407 14:00:15.610478    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:15.610478    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:15.610846    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:16.105785    9720 type.go:168] "Request Body" body=""
	I0407 14:00:16.105785    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:16.105785    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:16.105785    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:16.105785    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:16.110109    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:00:16.110193    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:16.110193    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:16.110193    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:16.110193    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:16.110193    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:16 GMT
	I0407 14:00:16.110193    9720 round_trippers.go:587]     Audit-Id: b075fa13-cb90-4033-916e-fe299898a587
	I0407 14:00:16.110260    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:16.111285    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:16.606432    9720 type.go:168] "Request Body" body=""
	I0407 14:00:16.606432    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:16.606432    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:16.606432    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:16.606432    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:16.611737    9720 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:00:16.611737    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:16.611737    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:16.611737    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:16.611737    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:16 GMT
	I0407 14:00:16.611737    9720 round_trippers.go:587]     Audit-Id: a98828ef-0ecd-4862-86ce-f565da8836ed
	I0407 14:00:16.611737    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:16.611737    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:16.612568    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:17.105727    9720 type.go:168] "Request Body" body=""
	I0407 14:00:17.106191    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:17.106191    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:17.106191    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:17.106191    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:17.110185    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:00:17.110185    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:17.110298    9720 round_trippers.go:587]     Audit-Id: 9f942de3-7db6-4bac-93a1-4fb490ae3595
	I0407 14:00:17.110298    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:17.110298    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:17.110298    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:17.110298    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:17.110298    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:17 GMT
	I0407 14:00:17.110620    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:17.606738    9720 type.go:168] "Request Body" body=""
	I0407 14:00:17.607230    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:17.607360    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:17.607360    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:17.607360    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:17.611082    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:00:17.611082    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:17.611339    9720 round_trippers.go:587]     Audit-Id: 54c864de-6939-47bd-a575-7d2ec252e0d0
	I0407 14:00:17.611339    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:17.611339    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:17.611339    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:17.611339    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:17.611339    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:17 GMT
	I0407 14:00:17.611808    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:17.611948    9720 node_ready.go:53] node "multinode-140200" has status "Ready":"False"
	I0407 14:00:18.106213    9720 type.go:168] "Request Body" body=""
	I0407 14:00:18.106295    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:18.106295    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:18.106295    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:18.106295    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:18.110074    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:00:18.110074    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:18.110170    9720 round_trippers.go:587]     Audit-Id: f301f136-0976-4867-bb77-6c7287e4cf20
	I0407 14:00:18.110170    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:18.110170    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:18.110170    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:18.110170    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:18.110170    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:18 GMT
	I0407 14:00:18.110582    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:18.606437    9720 type.go:168] "Request Body" body=""
	I0407 14:00:18.606437    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:18.606437    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:18.606437    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:18.606437    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:18.610734    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:00:18.610734    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:18.610734    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:18.610802    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:18 GMT
	I0407 14:00:18.610802    9720 round_trippers.go:587]     Audit-Id: ce0dbec7-6370-410f-a394-7ccce2249854
	I0407 14:00:18.610802    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:18.610802    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:18.610802    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:18.610802    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:19.105912    9720 type.go:168] "Request Body" body=""
	I0407 14:00:19.105912    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:19.105912    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:19.105912    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:19.105912    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:19.110218    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:00:19.110320    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:19.110320    9720 round_trippers.go:587]     Audit-Id: 3918b2b9-9fe3-4483-bf72-3d357c196edc
	I0407 14:00:19.110320    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:19.110320    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:19.110320    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:19.110320    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:19.110399    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:19 GMT
	I0407 14:00:19.111274    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:19.606021    9720 type.go:168] "Request Body" body=""
	I0407 14:00:19.606322    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:19.606322    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:19.606322    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:19.606322    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:19.613669    9720 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 14:00:19.613731    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:19.613731    9720 round_trippers.go:587]     Audit-Id: ea966ee1-91ad-4b15-8d19-94239996246c
	I0407 14:00:19.613731    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:19.613731    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:19.613731    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:19.613731    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:19.613731    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:19 GMT
	I0407 14:00:19.614441    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:19.614441    9720 node_ready.go:53] node "multinode-140200" has status "Ready":"False"
	I0407 14:00:20.106377    9720 type.go:168] "Request Body" body=""
	I0407 14:00:20.107037    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:20.107037    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:20.107037    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:20.107037    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:20.114754    9720 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 14:00:20.114818    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:20.114818    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:20.114818    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:20.114818    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:20.114818    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:20 GMT
	I0407 14:00:20.114818    9720 round_trippers.go:587]     Audit-Id: 979e308f-d9d2-44dd-a87c-38a511479e35
	I0407 14:00:20.114818    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:20.114892    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:20.605991    9720 type.go:168] "Request Body" body=""
	I0407 14:00:20.605991    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:20.605991    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:20.605991    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:20.605991    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:20.612661    9720 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 14:00:20.612743    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:20.612743    9720 round_trippers.go:587]     Audit-Id: 0b5737ff-fe12-45de-9972-9e19bd04bea2
	I0407 14:00:20.612743    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:20.612743    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:20.612743    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:20.612743    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:20.612743    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:20 GMT
	I0407 14:00:20.613064    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:21.106434    9720 type.go:168] "Request Body" body=""
	I0407 14:00:21.106823    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:21.107218    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:21.107218    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:21.107218    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:21.110731    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:00:21.110731    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:21.110731    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:21 GMT
	I0407 14:00:21.110731    9720 round_trippers.go:587]     Audit-Id: 786bdd1d-975c-4e93-b517-65453a91ab3e
	I0407 14:00:21.110731    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:21.110731    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:21.110810    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:21.110810    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:21.111326    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:21.606248    9720 type.go:168] "Request Body" body=""
	I0407 14:00:21.606248    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:21.606248    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:21.606248    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:21.606248    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:21.611226    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:00:21.611498    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:21.611498    9720 round_trippers.go:587]     Audit-Id: 7f817919-23ea-4687-9c66-2dee98d15b70
	I0407 14:00:21.611498    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:21.611498    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:21.611498    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:21.611498    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:21.611498    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:21 GMT
	I0407 14:00:21.611498    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:22.106278    9720 type.go:168] "Request Body" body=""
	I0407 14:00:22.106278    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:22.106278    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:22.106278    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:22.106278    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:22.110415    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:00:22.110415    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:22.110415    9720 round_trippers.go:587]     Audit-Id: 78bb940f-b95c-4017-8343-1a3f91c5a135
	I0407 14:00:22.110415    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:22.110415    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:22.110415    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:22.110415    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:22.110415    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:22 GMT
	I0407 14:00:22.110906    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:22.111081    9720 node_ready.go:53] node "multinode-140200" has status "Ready":"False"
	I0407 14:00:22.605605    9720 type.go:168] "Request Body" body=""
	I0407 14:00:22.605605    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:22.605605    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:22.605605    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:22.605605    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:22.610976    9720 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:00:22.611435    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:22.611435    9720 round_trippers.go:587]     Audit-Id: 334b3133-5001-450d-a555-bd56a378e92e
	I0407 14:00:22.611435    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:22.611435    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:22.611435    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:22.611435    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:22.611435    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:22 GMT
	I0407 14:00:22.611888    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:23.106444    9720 type.go:168] "Request Body" body=""
	I0407 14:00:23.106444    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:23.106444    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:23.106444    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:23.106444    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:23.110862    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:00:23.110899    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:23.110899    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:23.110974    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:23.110974    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:23.110974    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:23 GMT
	I0407 14:00:23.110974    9720 round_trippers.go:587]     Audit-Id: 6e669aa6-57f3-499c-88de-2834e26e1694
	I0407 14:00:23.110974    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:23.111993    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:23.606208    9720 type.go:168] "Request Body" body=""
	I0407 14:00:23.606768    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:23.606768    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:23.606768    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:23.606768    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:23.612033    9720 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:00:23.612033    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:23.612102    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:23.612102    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:23.612102    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:23.612102    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:23 GMT
	I0407 14:00:23.612102    9720 round_trippers.go:587]     Audit-Id: adc91530-09ff-49e8-ba9d-da5059e3d2a0
	I0407 14:00:23.612102    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:23.612733    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:24.106005    9720 type.go:168] "Request Body" body=""
	I0407 14:00:24.106125    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:24.106125    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:24.106125    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:24.106125    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:24.110585    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:00:24.110585    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:24.110585    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:24.110585    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:24.110585    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:24.110585    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:24 GMT
	I0407 14:00:24.110585    9720 round_trippers.go:587]     Audit-Id: e81cd949-f868-4d33-89c9-4099aefbc8a6
	I0407 14:00:24.110585    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:24.110585    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:24.111325    9720 node_ready.go:53] node "multinode-140200" has status "Ready":"False"
	I0407 14:00:24.607095    9720 type.go:168] "Request Body" body=""
	I0407 14:00:24.607206    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:24.607275    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:24.607275    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:24.607275    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:24.613173    9720 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:00:24.614252    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:24.614252    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:24.614252    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:24.614252    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:24.614252    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:24 GMT
	I0407 14:00:24.614252    9720 round_trippers.go:587]     Audit-Id: 5a2164de-ada6-4c72-b0a5-bf253adb42f2
	I0407 14:00:24.614252    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:24.616474    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:25.106419    9720 type.go:168] "Request Body" body=""
	I0407 14:00:25.106751    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:25.106751    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:25.106819    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:25.106819    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:25.113360    9720 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 14:00:25.113360    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:25.113360    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:25 GMT
	I0407 14:00:25.113360    9720 round_trippers.go:587]     Audit-Id: fde2e914-209a-43a4-8a8f-1cc0997fa2ba
	I0407 14:00:25.113360    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:25.113360    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:25.113360    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:25.113360    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:25.114054    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:25.606220    9720 type.go:168] "Request Body" body=""
	I0407 14:00:25.606220    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:25.606220    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:25.606220    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:25.606220    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:25.610456    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:00:25.610578    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:25.610578    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:25.610578    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:25.610639    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:25.610655    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:25 GMT
	I0407 14:00:25.610655    9720 round_trippers.go:587]     Audit-Id: 52deeb21-1e86-4b56-a4c9-80ddb3b2564d
	I0407 14:00:25.610655    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:25.611481    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 ba 22 0a c6 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 33 36  33 38 00 42 08 08 dd b4  |d4df2.3638.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20923 chars]
	 >
	I0407 14:00:26.106234    9720 type.go:168] "Request Body" body=""
	I0407 14:00:26.106410    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:26.106410    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:26.106410    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:26.106485    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:26.109166    9720 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:00:26.110008    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:26.110008    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:26.110008    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:26.110008    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:26.110074    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:26.110074    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:26 GMT
	I0407 14:00:26.110074    9720 round_trippers.go:587]     Audit-Id: 7d358ea4-4a66-4acd-9c03-2f67caebc3a4
	I0407 14:00:26.110717    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c1 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 34 32  38 38 00 42 08 08 dd b4  |d4df2.4288.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20298 chars]
	 >
	I0407 14:00:26.110990    9720 node_ready.go:49] node "multinode-140200" has status "Ready":"True"
	I0407 14:00:26.111106    9720 node_ready.go:38] duration metric: took 20.0055216s for node "multinode-140200" to be "Ready" ...
	I0407 14:00:26.111106    9720 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 14:00:26.111237    9720 type.go:204] "Request Body" body=""
	I0407 14:00:26.111268    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods
	I0407 14:00:26.111322    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:26.111322    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:26.111322    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:26.118950    9720 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 14:00:26.118950    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:26.118950    9720 round_trippers.go:587]     Audit-Id: 08173164-39aa-4fd7-930a-b940f1df309c
	I0407 14:00:26.118950    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:26.118950    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:26.118950    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:26.118950    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:26.118950    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:26 GMT
	I0407 14:00:26.122835    9720 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 cf c5 02 0a  09 0a 00 12 03 34 33 34  |ist..........434|
		00000020  1a 00 12 d5 26 0a 8b 19  0a 18 63 6f 72 65 64 6e  |....&.....coredn|
		00000030  73 2d 36 36 38 64 36 62  66 39 62 63 2d 35 66 70  |s-668d6bf9bc-5fp|
		00000040  34 66 12 13 63 6f 72 65  64 6e 73 2d 36 36 38 64  |4f..coredns-668d|
		00000050  36 62 66 39 62 63 2d 1a  0b 6b 75 62 65 2d 73 79  |6bf9bc-..kube-sy|
		00000060  73 74 65 6d 22 00 2a 24  34 33 37 32 32 36 61 65  |stem".*$437226ae|
		00000070  2d 65 36 33 64 2d 34 32  34 35 2d 62 62 65 61 2d  |-e63d-4245-bbea-|
		00000080  61 64 35 63 34 31 66 66  39 61 39 33 32 03 34 33  |ad5c41ff9a932.43|
		00000090  34 38 00 42 08 08 e6 b4  cf bf 06 10 00 5a 13 0a  |48.B.........Z..|
		000000a0  07 6b 38 73 2d 61 70 70  12 08 6b 75 62 65 2d 64  |.k8s-app..kube-d|
		000000b0  6e 73 5a 1f 0a 11 70 6f  64 2d 74 65 6d 70 6c 61  |nsZ...pod-templa|
		000000c0  74 65 2d 68 61 73 68 12  0a 36 36 38 64 36 62 66  |te-hash..668d6b [truncated 204923 chars]
	 >
	I0407 14:00:26.123831    9720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-5fp4f" in "kube-system" namespace to be "Ready" ...
	I0407 14:00:26.124041    9720 type.go:168] "Request Body" body=""
	I0407 14:00:26.124109    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-5fp4f
	I0407 14:00:26.124135    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:26.124135    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:26.124176    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:26.126817    9720 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:00:26.126817    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:26.126817    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:26 GMT
	I0407 14:00:26.126817    9720 round_trippers.go:587]     Audit-Id: 55fb75f5-31ad-463f-8a7f-0d2912e5201a
	I0407 14:00:26.126817    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:26.126817    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:26.126817    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:26.126817    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:26.126817    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  d5 26 0a 8b 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.&.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 35 66 70 34 66 12  |68d6bf9bc-5fp4f.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 34 33 37  32 32 36 61 65 2d 65 36  |m".*$437226ae-e6|
		00000060  33 64 2d 34 32 34 35 2d  62 62 65 61 2d 61 64 35  |3d-4245-bbea-ad5|
		00000070  63 34 31 66 66 39 61 39  33 32 03 34 33 34 38 00  |c41ff9a932.4348.|
		00000080  42 08 08 e6 b4 cf bf 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 23540 chars]
	 >
	I0407 14:00:26.126817    9720 type.go:168] "Request Body" body=""
	I0407 14:00:26.128294    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:26.128294    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:26.128294    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:26.128294    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:26.131531    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:00:26.131632    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:26.131667    9720 round_trippers.go:587]     Audit-Id: 55d7efb7-1404-48d0-a08e-8f992d2a6089
	I0407 14:00:26.131667    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:26.131667    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:26.131667    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:26.131667    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:26.131667    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:26 GMT
	I0407 14:00:26.131931    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c1 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 34 32  38 38 00 42 08 08 dd b4  |d4df2.4288.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20298 chars]
	 >
	I0407 14:00:26.624174    9720 type.go:168] "Request Body" body=""
	I0407 14:00:26.624174    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-5fp4f
	I0407 14:00:26.624174    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:26.624174    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:26.624174    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:26.632414    9720 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0407 14:00:26.632414    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:26.632414    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:26.632495    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:26.632495    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:26.632495    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:26.632495    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:26 GMT
	I0407 14:00:26.632495    9720 round_trippers.go:587]     Audit-Id: 4237d14b-86f6-41c7-9b3d-4b6f3012cd3b
	I0407 14:00:26.635362    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  d5 26 0a 8b 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.&.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 35 66 70 34 66 12  |68d6bf9bc-5fp4f.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 34 33 37  32 32 36 61 65 2d 65 36  |m".*$437226ae-e6|
		00000060  33 64 2d 34 32 34 35 2d  62 62 65 61 2d 61 64 35  |3d-4245-bbea-ad5|
		00000070  63 34 31 66 66 39 61 39  33 32 03 34 33 34 38 00  |c41ff9a932.4348.|
		00000080  42 08 08 e6 b4 cf bf 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 23540 chars]
	 >
	I0407 14:00:26.635362    9720 type.go:168] "Request Body" body=""
	I0407 14:00:26.636368    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:26.636368    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:26.636368    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:26.636368    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:26.643370    9720 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 14:00:26.643370    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:26.643370    9720 round_trippers.go:587]     Audit-Id: d10b7a8e-f8bd-4995-8f57-620b8c594881
	I0407 14:00:26.643370    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:26.643370    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:26.643370    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:26.644391    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:26.644391    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:26 GMT
	I0407 14:00:26.644391    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c1 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 34 32  38 38 00 42 08 08 dd b4  |d4df2.4288.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20298 chars]
	 >
	I0407 14:00:27.124544    9720 type.go:168] "Request Body" body=""
	I0407 14:00:27.124544    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-5fp4f
	I0407 14:00:27.124544    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:27.124544    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:27.124544    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:27.129606    9720 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:00:27.129606    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:27.129606    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:27.129606    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:27.129606    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:27.129606    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:27 GMT
	I0407 14:00:27.129606    9720 round_trippers.go:587]     Audit-Id: 43acbd37-d942-4b8b-bd93-d60fbf2330e4
	I0407 14:00:27.129606    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:27.130251    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  d5 26 0a 8b 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.&.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 35 66 70 34 66 12  |68d6bf9bc-5fp4f.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 34 33 37  32 32 36 61 65 2d 65 36  |m".*$437226ae-e6|
		00000060  33 64 2d 34 32 34 35 2d  62 62 65 61 2d 61 64 35  |3d-4245-bbea-ad5|
		00000070  63 34 31 66 66 39 61 39  33 32 03 34 33 34 38 00  |c41ff9a932.4348.|
		00000080  42 08 08 e6 b4 cf bf 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 23540 chars]
	 >
	I0407 14:00:27.130572    9720 type.go:168] "Request Body" body=""
	I0407 14:00:27.130572    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:27.130802    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:27.130802    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:27.130802    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:27.142541    9720 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0407 14:00:27.142541    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:27.142541    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:27.142541    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:27.142541    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:27 GMT
	I0407 14:00:27.142541    9720 round_trippers.go:587]     Audit-Id: 62999923-9ced-4e41-8db3-236784373575
	I0407 14:00:27.142541    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:27.142541    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:27.144537    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c1 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 34 32  38 38 00 42 08 08 dd b4  |d4df2.4288.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20298 chars]
	 >
	I0407 14:00:27.624500    9720 type.go:168] "Request Body" body=""
	I0407 14:00:27.624500    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-5fp4f
	I0407 14:00:27.624500    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:27.624500    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:27.624500    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:27.628834    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:00:27.628834    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:27.628988    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:27.628988    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:27.628988    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:27.628988    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:27 GMT
	I0407 14:00:27.628988    9720 round_trippers.go:587]     Audit-Id: cf31dde4-7553-47cd-9a30-82cc18da7a69
	I0407 14:00:27.628988    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:27.629378    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  d5 26 0a 8b 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.&.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 35 66 70 34 66 12  |68d6bf9bc-5fp4f.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 34 33 37  32 32 36 61 65 2d 65 36  |m".*$437226ae-e6|
		00000060  33 64 2d 34 32 34 35 2d  62 62 65 61 2d 61 64 35  |3d-4245-bbea-ad5|
		00000070  63 34 31 66 66 39 61 39  33 32 03 34 33 34 38 00  |c41ff9a932.4348.|
		00000080  42 08 08 e6 b4 cf bf 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 23540 chars]
	 >
	I0407 14:00:27.629710    9720 type.go:168] "Request Body" body=""
	I0407 14:00:27.629756    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:27.629756    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:27.629756    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:27.629756    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:27.635696    9720 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:00:27.635772    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:27.635772    9720 round_trippers.go:587]     Audit-Id: 06f2bdf7-5b3d-4689-b0e5-73cc432f6e64
	I0407 14:00:27.635772    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:27.635772    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:27.635772    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:27.635772    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:27.635772    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:27 GMT
	I0407 14:00:27.636609    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c1 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 34 32  38 38 00 42 08 08 dd b4  |d4df2.4288.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20298 chars]
	 >
	I0407 14:00:28.125228    9720 type.go:168] "Request Body" body=""
	I0407 14:00:28.125228    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-5fp4f
	I0407 14:00:28.125228    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:28.125228    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:28.125228    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:28.128689    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:00:28.128798    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:28.128798    9720 round_trippers.go:587]     Audit-Id: 269497b1-e16f-4b9f-8dc2-0f2c368926ad
	I0407 14:00:28.128798    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:28.128798    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:28.128798    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:28.128798    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:28.128798    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:28 GMT
	I0407 14:00:28.129116    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  d5 26 0a 8b 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.&.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 35 66 70 34 66 12  |68d6bf9bc-5fp4f.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 34 33 37  32 32 36 61 65 2d 65 36  |m".*$437226ae-e6|
		00000060  33 64 2d 34 32 34 35 2d  62 62 65 61 2d 61 64 35  |3d-4245-bbea-ad5|
		00000070  63 34 31 66 66 39 61 39  33 32 03 34 33 34 38 00  |c41ff9a932.4348.|
		00000080  42 08 08 e6 b4 cf bf 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 23540 chars]
	 >
	I0407 14:00:28.129413    9720 type.go:168] "Request Body" body=""
	I0407 14:00:28.129488    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:28.129541    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:28.129541    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:28.129541    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:28.143373    9720 round_trippers.go:581] Response Status: 200 OK in 13 milliseconds
	I0407 14:00:28.143373    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:28.143373    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:28.143454    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:28 GMT
	I0407 14:00:28.143454    9720 round_trippers.go:587]     Audit-Id: 65890ab2-087a-4ed4-94c1-89802fcca304
	I0407 14:00:28.143454    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:28.143454    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:28.143454    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:28.144410    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c1 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 34 32  38 38 00 42 08 08 dd b4  |d4df2.4288.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20298 chars]
	 >
	I0407 14:00:28.144692    9720 pod_ready.go:103] pod "coredns-668d6bf9bc-5fp4f" in "kube-system" namespace has status "Ready":"False"
	I0407 14:00:28.624505    9720 type.go:168] "Request Body" body=""
	I0407 14:00:28.624600    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-5fp4f
	I0407 14:00:28.624681    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:28.624681    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:28.624681    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:28.628650    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:00:28.628650    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:28.628650    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:28.628650    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:28.628650    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:28.628650    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:28.628650    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:28 GMT
	I0407 14:00:28.628650    9720 round_trippers.go:587]     Audit-Id: 645c923e-ba2b-42d6-9162-021c6ad34ea2
	I0407 14:00:28.629274    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  ce 27 0a ae 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.'.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 35 66 70 34 66 12  |68d6bf9bc-5fp4f.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 34 33 37  32 32 36 61 65 2d 65 36  |m".*$437226ae-e6|
		00000060  33 64 2d 34 32 34 35 2d  62 62 65 61 2d 61 64 35  |3d-4245-bbea-ad5|
		00000070  63 34 31 66 66 39 61 39  33 32 03 34 34 38 38 00  |c41ff9a932.4488.|
		00000080  42 08 08 e6 b4 cf bf 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 24165 chars]
	 >
	I0407 14:00:28.629274    9720 type.go:168] "Request Body" body=""
	I0407 14:00:28.629274    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:28.629274    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:28.629274    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:28.629818    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:28.633118    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:00:28.633212    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:28.633212    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:28.633212    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:28.633212    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:28.633212    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:28.633212    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:28 GMT
	I0407 14:00:28.633212    9720 round_trippers.go:587]     Audit-Id: cec3f536-94dd-4e45-b4db-6c1b8e7ad2df
	I0407 14:00:28.634084    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c1 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 34 32  38 38 00 42 08 08 dd b4  |d4df2.4288.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20298 chars]
	 >
	I0407 14:00:28.634084    9720 pod_ready.go:93] pod "coredns-668d6bf9bc-5fp4f" in "kube-system" namespace has status "Ready":"True"
	I0407 14:00:28.634084    9720 pod_ready.go:82] duration metric: took 2.5101172s for pod "coredns-668d6bf9bc-5fp4f" in "kube-system" namespace to be "Ready" ...
	I0407 14:00:28.634084    9720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:00:28.634084    9720 type.go:168] "Request Body" body=""
	I0407 14:00:28.634084    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-140200
	I0407 14:00:28.634084    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:28.634084    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:28.634084    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:28.636855    9720 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:00:28.636855    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:28.636855    9720 round_trippers.go:587]     Audit-Id: 072ee062-90ff-45d4-bb82-cf4d775eb414
	I0407 14:00:28.636855    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:28.636855    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:28.636855    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:28.636855    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:28.636855    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:28 GMT
	I0407 14:00:28.638170    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  95 2b 0a 9a 1a 0a 15 65  74 63 64 2d 6d 75 6c 74  |.+.....etcd-mult|
		00000020  69 6e 6f 64 65 2d 31 34  30 32 30 30 12 00 1a 0b  |inode-140200....|
		00000030  6b 75 62 65 2d 73 79 73  74 65 6d 22 00 2a 24 62  |kube-system".*$b|
		00000040  33 34 35 63 35 66 32 2d  65 36 63 30 2d 34 65 66  |345c5f2-e6c0-4ef|
		00000050  62 2d 39 31 30 65 2d 62  32 38 39 36 36 66 65 30  |b-910e-b28966fe0|
		00000060  33 32 64 32 03 34 30 37  38 00 42 08 08 e0 b4 cf  |32d2.4078.B.....|
		00000070  bf 06 10 00 5a 11 0a 09  63 6f 6d 70 6f 6e 65 6e  |....Z...componen|
		00000080  74 12 04 65 74 63 64 5a  15 0a 04 74 69 65 72 12  |t..etcdZ...tier.|
		00000090  0d 63 6f 6e 74 72 6f 6c  2d 70 6c 61 6e 65 62 4d  |.control-planebM|
		000000a0  0a 30 6b 75 62 65 61 64  6d 2e 6b 75 62 65 72 6e  |.0kubeadm.kubern|
		000000b0  65 74 65 73 2e 69 6f 2f  65 74 63 64 2e 61 64 76  |etes.io/etcd.adv|
		000000c0  65 72 74 69 73 65 2d 63  6c 69 65 6e 74 2d 75 72  |ertise-client-u [truncated 26384 chars]
	 >
	I0407 14:00:28.638170    9720 type.go:168] "Request Body" body=""
	I0407 14:00:28.638170    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:28.638170    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:28.638170    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:28.638170    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:28.643232    9720 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:00:28.643330    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:28.643330    9720 round_trippers.go:587]     Audit-Id: 1d4e54c5-f97d-49b4-b354-354b1cc0b732
	I0407 14:00:28.643330    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:28.643330    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:28.643330    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:28.643330    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:28.643330    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:28 GMT
	I0407 14:00:28.643599    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c1 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 34 32  38 38 00 42 08 08 dd b4  |d4df2.4288.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20298 chars]
	 >
	I0407 14:00:28.643599    9720 pod_ready.go:93] pod "etcd-multinode-140200" in "kube-system" namespace has status "Ready":"True"
	I0407 14:00:28.644198    9720 pod_ready.go:82] duration metric: took 10.0436ms for pod "etcd-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:00:28.644198    9720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:00:28.644198    9720 type.go:168] "Request Body" body=""
	I0407 14:00:28.644364    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-140200
	I0407 14:00:28.644364    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:28.644364    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:28.644364    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:28.646759    9720 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:00:28.646759    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:28.646759    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:28.646759    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:28 GMT
	I0407 14:00:28.646759    9720 round_trippers.go:587]     Audit-Id: cb0e9cad-489c-490d-b84f-63070e3bef32
	I0407 14:00:28.646759    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:28.646759    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:28.646759    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:28.646759    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  fb 33 0a aa 1c 0a 1f 6b  75 62 65 2d 61 70 69 73  |.3.....kube-apis|
		00000020  65 72 76 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |erver-multinode-|
		00000030  31 34 30 32 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |140200....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 30 66 37 38 32 39 30  |ystem".*$0f78290|
		00000050  36 2d 39 38 63 32 2d 34  64 65 64 2d 39 61 39 38  |6-98c2-4ded-9a98|
		00000060  2d 62 63 37 62 31 34 33  35 30 62 30 36 32 03 33  |-bc7b14350b062.3|
		00000070  35 33 38 00 42 08 08 e0  b4 cf bf 06 10 00 5a 1b  |538.B.........Z.|
		00000080  0a 09 63 6f 6d 70 6f 6e  65 6e 74 12 0e 6b 75 62  |..component..kub|
		00000090  65 2d 61 70 69 73 65 72  76 65 72 5a 15 0a 04 74  |e-apiserverZ...t|
		000000a0  69 65 72 12 0d 63 6f 6e  74 72 6f 6c 2d 70 6c 61  |ier..control-pla|
		000000b0  6e 65 62 54 0a 3f 6b 75  62 65 61 64 6d 2e 6b 75  |nebT.?kubeadm.ku|
		000000c0  62 65 72 6e 65 74 65 73  2e 69 6f 2f 6b 75 62 65  |bernetes.io/kub [truncated 31983 chars]
	 >
	I0407 14:00:28.646759    9720 type.go:168] "Request Body" body=""
	I0407 14:00:28.646759    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:28.646759    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:28.646759    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:28.646759    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:28.649526    9720 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:00:28.649526    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:28.649526    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:28.649526    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:28.649526    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:28.650577    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:28 GMT
	I0407 14:00:28.650577    9720 round_trippers.go:587]     Audit-Id: 9b22bca5-b435-4e44-bfce-cf2199cc685e
	I0407 14:00:28.650577    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:28.650630    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c1 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 34 32  38 38 00 42 08 08 dd b4  |d4df2.4288.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20298 chars]
	 >
	I0407 14:00:28.650630    9720 pod_ready.go:93] pod "kube-apiserver-multinode-140200" in "kube-system" namespace has status "Ready":"True"
	I0407 14:00:28.650630    9720 pod_ready.go:82] duration metric: took 6.4328ms for pod "kube-apiserver-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:00:28.650630    9720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:00:28.650630    9720 type.go:168] "Request Body" body=""
	I0407 14:00:28.650630    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-140200
	I0407 14:00:28.650630    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:28.650630    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:28.650630    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:28.655513    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:00:28.655549    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:28.655549    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:28 GMT
	I0407 14:00:28.655549    9720 round_trippers.go:587]     Audit-Id: 6350d248-5d47-469c-beca-06dfbc1281b2
	I0407 14:00:28.655549    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:28.655549    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:28.655599    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:28.655599    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:28.655978    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  e6 30 0a 98 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.0....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 31 34 30 32 30 30 12  |ultinode-140200.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 61 37 63 36 65 33  62 62 2d 31 39 37 63 2d  |*$a7c6e3bb-197c-|
		00000060  34 33 34 65 2d 39 66 31  39 2d 37 34 64 37 65 34  |434e-9f19-74d7e4|
		00000070  38 62 35 30 64 65 32 03  34 30 31 38 00 42 08 08  |8b50de2.4018.B..|
		00000080  e0 b4 cf bf 06 10 00 5a  24 0a 09 63 6f 6d 70 6f  |.......Z$..compo|
		00000090  6e 65 6e 74 12 17 6b 75  62 65 2d 63 6f 6e 74 72  |nent..kube-contr|
		000000a0  6f 6c 6c 65 72 2d 6d 61  6e 61 67 65 72 5a 15 0a  |oller-managerZ..|
		000000b0  04 74 69 65 72 12 0d 63  6f 6e 74 72 6f 6c 2d 70  |.tier..control-p|
		000000c0  6c 61 6e 65 62 3d 0a 19  6b 75 62 65 72 6e 65 74  |laneb=..kuberne [truncated 29940 chars]
	 >
	I0407 14:00:28.656006    9720 type.go:168] "Request Body" body=""
	I0407 14:00:28.656006    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:28.656006    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:28.656006    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:28.656006    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:28.658890    9720 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:00:28.658890    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:28.658890    9720 round_trippers.go:587]     Audit-Id: a97676b2-36af-4765-9716-712cfed89c28
	I0407 14:00:28.658890    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:28.658890    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:28.658890    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:28.658890    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:28.658890    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:28 GMT
	I0407 14:00:28.658890    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c1 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 34 32  38 38 00 42 08 08 dd b4  |d4df2.4288.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20298 chars]
	 >
	I0407 14:00:28.658890    9720 pod_ready.go:93] pod "kube-controller-manager-multinode-140200" in "kube-system" namespace has status "Ready":"True"
	I0407 14:00:28.658890    9720 pod_ready.go:82] duration metric: took 8.2599ms for pod "kube-controller-manager-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:00:28.658890    9720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9rx2d" in "kube-system" namespace to be "Ready" ...
	I0407 14:00:28.658890    9720 type.go:168] "Request Body" body=""
	I0407 14:00:28.658890    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9rx2d
	I0407 14:00:28.658890    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:28.658890    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:28.658890    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:28.662175    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:00:28.662175    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:28.662175    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:28.662175    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:28.662175    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:28.662425    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:28.662425    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:28 GMT
	I0407 14:00:28.662425    9720 round_trippers.go:587]     Audit-Id: 33d941e3-26ed-450c-bf2f-3da593d10b15
	I0407 14:00:28.662513    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  98 25 0a be 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.%.....kube-prox|
		00000020  79 2d 39 72 78 32 64 12  0b 6b 75 62 65 2d 70 72  |y-9rx2d..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 32 65 61  61 62 32 35 64 2d 66 65  |m".*$2eaab25d-fe|
		00000050  30 62 2d 34 63 34 38 2d  61 63 36 62 2d 34 32 30  |0b-4c48-ac6b-420|
		00000060  39 35 66 35 66 62 63 65  36 32 03 33 39 39 38 00  |95f5fbce62.3998.|
		00000070  42 08 08 e5 b4 cf bf 06  10 00 5a 26 0a 18 63 6f  |B.........Z&..co|
		00000080  6e 74 72 6f 6c 6c 65 72  2d 72 65 76 69 73 69 6f  |ntroller-revisio|
		00000090  6e 2d 68 61 73 68 12 0a  37 62 62 38 34 63 34 39  |n-hash..7bb84c49|
		000000a0  38 34 5a 15 0a 07 6b 38  73 2d 61 70 70 12 0a 6b  |84Z...k8s-app..k|
		000000b0  75 62 65 2d 70 72 6f 78  79 5a 1c 0a 17 70 6f 64  |ube-proxyZ...pod|
		000000c0  2d 74 65 6d 70 6c 61 74  65 2d 67 65 6e 65 72 61  |-template-gener [truncated 22595 chars]
	 >
	I0407 14:00:28.662513    9720 type.go:168] "Request Body" body=""
	I0407 14:00:28.662513    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:28.662513    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:28.662513    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:28.663045    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:28.665381    9720 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:00:28.665640    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:28.665640    9720 round_trippers.go:587]     Audit-Id: 65a8ddd3-c95c-4b59-8042-a8678dd66007
	I0407 14:00:28.665640    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:28.665640    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:28.665640    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:28.665640    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:28.665640    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:28 GMT
	I0407 14:00:28.665936    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c1 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 34 32  38 38 00 42 08 08 dd b4  |d4df2.4288.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20298 chars]
	 >
	I0407 14:00:28.665936    9720 pod_ready.go:93] pod "kube-proxy-9rx2d" in "kube-system" namespace has status "Ready":"True"
	I0407 14:00:28.665936    9720 pod_ready.go:82] duration metric: took 7.0456ms for pod "kube-proxy-9rx2d" in "kube-system" namespace to be "Ready" ...
	I0407 14:00:28.665936    9720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:00:28.665936    9720 type.go:168] "Request Body" body=""
	I0407 14:00:28.824940    9720 request.go:661] Waited for 159.0028ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-140200
	I0407 14:00:28.824940    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-140200
	I0407 14:00:28.824940    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:28.824940    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:28.824940    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:28.829299    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:00:28.829411    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:28.829411    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:28.829411    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:28.829455    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:28 GMT
	I0407 14:00:28.829455    9720 round_trippers.go:587]     Audit-Id: b47d0c0c-6ca0-4842-a9ee-81acaed5adbb
	I0407 14:00:28.829455    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:28.829455    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:28.829743    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  f1 22 0a 80 18 0a 1f 6b  75 62 65 2d 73 63 68 65  |.".....kube-sche|
		00000020  64 75 6c 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |duler-multinode-|
		00000030  31 34 30 32 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |140200....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 38 38 64 66 65 65 65  |ystem".*$88dfeee|
		00000050  38 2d 61 33 63 31 2d 34  38 35 62 2d 61 62 66 65  |8-a3c1-485b-abfe|
		00000060  2d 39 65 61 66 30 30 35  37 64 36 63 66 32 03 34  |-9eaf0057d6cf2.4|
		00000070  30 35 38 00 42 08 08 e0  b4 cf bf 06 10 00 5a 1b  |058.B.........Z.|
		00000080  0a 09 63 6f 6d 70 6f 6e  65 6e 74 12 0e 6b 75 62  |..component..kub|
		00000090  65 2d 73 63 68 65 64 75  6c 65 72 5a 15 0a 04 74  |e-schedulerZ...t|
		000000a0  69 65 72 12 0d 63 6f 6e  74 72 6f 6c 2d 70 6c 61  |ier..control-pla|
		000000b0  6e 65 62 3d 0a 19 6b 75  62 65 72 6e 65 74 65 73  |neb=..kubernetes|
		000000c0  2e 69 6f 2f 63 6f 6e 66  69 67 2e 68 61 73 68 12  |.io/config.hash [truncated 21166 chars]
	 >
	I0407 14:00:28.829743    9720 type.go:168] "Request Body" body=""
	I0407 14:00:29.025071    9720 request.go:661] Waited for 195.3262ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:29.025071    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:00:29.025071    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:29.025071    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:29.025071    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:29.028985    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:00:29.028985    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:29.028985    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:29.028985    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:29.028985    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:29 GMT
	I0407 14:00:29.028985    9720 round_trippers.go:587]     Audit-Id: 84dbe6bc-e051-45d9-a080-3d8400c70ff2
	I0407 14:00:29.028985    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:29.028985    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:29.029319    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 c1 21 0a fc 10 0a 10  6d 75 6c 74 69 6e 6f 64  |..!.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 34 32  38 38 00 42 08 08 dd b4  |d4df2.4288.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 20298 chars]
	 >
	I0407 14:00:29.029491    9720 pod_ready.go:93] pod "kube-scheduler-multinode-140200" in "kube-system" namespace has status "Ready":"True"
	I0407 14:00:29.029491    9720 pod_ready.go:82] duration metric: took 363.5523ms for pod "kube-scheduler-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:00:29.029618    9720 pod_ready.go:39] duration metric: took 2.9183645s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 14:00:29.029618    9720 api_server.go:52] waiting for apiserver process to appear ...
	I0407 14:00:29.042168    9720 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:00:29.082343    9720 command_runner.go:130] > 2102
	I0407 14:00:29.083303    9720 api_server.go:72] duration metric: took 23.8261756s to wait for apiserver process to appear ...
	I0407 14:00:29.083303    9720 api_server.go:88] waiting for apiserver healthz status ...
	I0407 14:00:29.083303    9720 api_server.go:253] Checking apiserver healthz at https://172.17.92.89:8443/healthz ...
	I0407 14:00:29.090801    9720 api_server.go:279] https://172.17.92.89:8443/healthz returned 200:
	ok
	I0407 14:00:29.091782    9720 discovery_client.go:658] "Request Body" body=""
	I0407 14:00:29.091869    9720 round_trippers.go:470] GET https://172.17.92.89:8443/version
	I0407 14:00:29.091931    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:29.091931    9720 round_trippers.go:480]     Accept: application/json, */*
	I0407 14:00:29.091988    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:29.092819    9720 round_trippers.go:581] Response Status: 200 OK in 0 milliseconds
	I0407 14:00:29.092819    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:29.092819    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:29.093600    9720 round_trippers.go:587]     Content-Type: application/json
	I0407 14:00:29.093600    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:29.093600    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:29.093600    9720 round_trippers.go:587]     Content-Length: 263
	I0407 14:00:29.093600    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:29 GMT
	I0407 14:00:29.093600    9720 round_trippers.go:587]     Audit-Id: 5d4ced7a-737f-490c-9cfc-d0971ec5ea5d
	I0407 14:00:29.093677    9720 discovery_client.go:658] "Response Body" body=<
		{
		  "major": "1",
		  "minor": "32",
		  "gitVersion": "v1.32.2",
		  "gitCommit": "67a30c0adcf52bd3f56ff0893ce19966be12991f",
		  "gitTreeState": "clean",
		  "buildDate": "2025-02-12T21:19:47Z",
		  "goVersion": "go1.23.6",
		  "compiler": "gc",
		  "platform": "linux/amd64"
		}
	 >
	I0407 14:00:29.093798    9720 api_server.go:141] control plane version: v1.32.2
	I0407 14:00:29.093798    9720 api_server.go:131] duration metric: took 10.4952ms to wait for apiserver health ...
	I0407 14:00:29.093848    9720 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 14:00:29.093932    9720 type.go:204] "Request Body" body=""
	I0407 14:00:29.225480    9720 request.go:661] Waited for 131.5469ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods
	I0407 14:00:29.225480    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods
	I0407 14:00:29.225480    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:29.225480    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:29.225480    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:29.230660    9720 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:00:29.230660    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:29.230660    9720 round_trippers.go:587]     Audit-Id: 5202cd0e-8087-462d-8c60-98115eedb8d7
	I0407 14:00:29.230660    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:29.230660    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:29.230660    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:29.230854    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:29.230854    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:29 GMT
	I0407 14:00:29.231852    9720 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 ca c6 02 0a  09 0a 00 12 03 34 35 33  |ist..........453|
		00000020  1a 00 12 ce 27 0a ae 19  0a 18 63 6f 72 65 64 6e  |....'.....coredn|
		00000030  73 2d 36 36 38 64 36 62  66 39 62 63 2d 35 66 70  |s-668d6bf9bc-5fp|
		00000040  34 66 12 13 63 6f 72 65  64 6e 73 2d 36 36 38 64  |4f..coredns-668d|
		00000050  36 62 66 39 62 63 2d 1a  0b 6b 75 62 65 2d 73 79  |6bf9bc-..kube-sy|
		00000060  73 74 65 6d 22 00 2a 24  34 33 37 32 32 36 61 65  |stem".*$437226ae|
		00000070  2d 65 36 33 64 2d 34 32  34 35 2d 62 62 65 61 2d  |-e63d-4245-bbea-|
		00000080  61 64 35 63 34 31 66 66  39 61 39 33 32 03 34 34  |ad5c41ff9a932.44|
		00000090  38 38 00 42 08 08 e6 b4  cf bf 06 10 00 5a 13 0a  |88.B.........Z..|
		000000a0  07 6b 38 73 2d 61 70 70  12 08 6b 75 62 65 2d 64  |.k8s-app..kube-d|
		000000b0  6e 73 5a 1f 0a 11 70 6f  64 2d 74 65 6d 70 6c 61  |nsZ...pod-templa|
		000000c0  74 65 2d 68 61 73 68 12  0a 36 36 38 64 36 62 66  |te-hash..668d6b [truncated 205550 chars]
	 >
	I0407 14:00:29.232688    9720 system_pods.go:59] 8 kube-system pods found
	I0407 14:00:29.232688    9720 system_pods.go:61] "coredns-668d6bf9bc-5fp4f" [437226ae-e63d-4245-bbea-ad5c41ff9a93] Running
	I0407 14:00:29.232688    9720 system_pods.go:61] "etcd-multinode-140200" [b345c5f2-e6c0-4efb-910e-b28966fe032d] Running
	I0407 14:00:29.232688    9720 system_pods.go:61] "kindnet-zkw9q" [123858da-6f70-4b10-b38e-bd930d21dbe4] Running
	I0407 14:00:29.232688    9720 system_pods.go:61] "kube-apiserver-multinode-140200" [0f782906-98c2-4ded-9a98-bc7b14350b06] Running
	I0407 14:00:29.232688    9720 system_pods.go:61] "kube-controller-manager-multinode-140200" [a7c6e3bb-197c-434e-9f19-74d7e48b50de] Running
	I0407 14:00:29.232688    9720 system_pods.go:61] "kube-proxy-9rx2d" [2eaab25d-fe0b-4c48-ac6b-42095f5fbce6] Running
	I0407 14:00:29.232688    9720 system_pods.go:61] "kube-scheduler-multinode-140200" [88dfeee8-a3c1-485b-abfe-9eaf0057d6cf] Running
	I0407 14:00:29.232688    9720 system_pods.go:61] "storage-provisioner" [01df03d8-8816-480c-941b-180069d26997] Running
	I0407 14:00:29.232688    9720 system_pods.go:74] duration metric: took 138.8386ms to wait for pod list to return data ...
	I0407 14:00:29.232688    9720 default_sa.go:34] waiting for default service account to be created ...
	I0407 14:00:29.232688    9720 type.go:204] "Request Body" body=""
	I0407 14:00:29.425155    9720 request.go:661] Waited for 192.4665ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.92.89:8443/api/v1/namespaces/default/serviceaccounts
	I0407 14:00:29.425155    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/namespaces/default/serviceaccounts
	I0407 14:00:29.425155    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:29.425155    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:29.425155    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:29.430437    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:00:29.430437    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:29.430437    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:29.430437    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:29.430437    9720 round_trippers.go:587]     Content-Length: 128
	I0407 14:00:29.430437    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:29 GMT
	I0407 14:00:29.430437    9720 round_trippers.go:587]     Audit-Id: 69af2d88-00b9-4ec8-b1e5-957d1b521f0b
	I0407 14:00:29.430437    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:29.430437    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:29.430697    9720 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 18 0a 02  76 31 12 12 53 65 72 76  |k8s.....v1..Serv|
		00000010  69 63 65 41 63 63 6f 75  6e 74 4c 69 73 74 12 5c  |iceAccountList.\|
		00000020  0a 09 0a 00 12 03 34 35  33 1a 00 12 4f 0a 4d 0a  |......453...O.M.|
		00000030  07 64 65 66 61 75 6c 74  12 00 1a 07 64 65 66 61  |.default....defa|
		00000040  75 6c 74 22 00 2a 24 66  66 31 39 65 66 62 31 2d  |ult".*$ff19efb1-|
		00000050  63 35 63 63 2d 34 63 39  30 2d 62 63 36 61 2d 31  |c5cc-4c90-bc6a-1|
		00000060  36 33 38 65 32 62 61 39  39 37 38 32 03 33 33 34  |638e2ba99782.334|
		00000070  38 00 42 08 08 e5 b4 cf  bf 06 10 00 1a 00 22 00  |8.B...........".|
	 >
	I0407 14:00:29.430829    9720 default_sa.go:45] found service account: "default"
	I0407 14:00:29.430829    9720 default_sa.go:55] duration metric: took 198.1405ms for default service account to be created ...
	I0407 14:00:29.430829    9720 system_pods.go:116] waiting for k8s-apps to be running ...
	I0407 14:00:29.430986    9720 type.go:204] "Request Body" body=""
	I0407 14:00:29.625228    9720 request.go:661] Waited for 194.2407ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods
	I0407 14:00:29.625228    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods
	I0407 14:00:29.625228    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:29.625228    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:29.625228    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:29.630641    9720 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:00:29.630641    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:29.630718    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:29.630718    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:29.630718    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:29.630718    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:29 GMT
	I0407 14:00:29.630718    9720 round_trippers.go:587]     Audit-Id: 751a9b48-4331-438a-b8bb-5004f2bf4133
	I0407 14:00:29.630718    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:29.632700    9720 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 ca c6 02 0a  09 0a 00 12 03 34 35 33  |ist..........453|
		00000020  1a 00 12 ce 27 0a ae 19  0a 18 63 6f 72 65 64 6e  |....'.....coredn|
		00000030  73 2d 36 36 38 64 36 62  66 39 62 63 2d 35 66 70  |s-668d6bf9bc-5fp|
		00000040  34 66 12 13 63 6f 72 65  64 6e 73 2d 36 36 38 64  |4f..coredns-668d|
		00000050  36 62 66 39 62 63 2d 1a  0b 6b 75 62 65 2d 73 79  |6bf9bc-..kube-sy|
		00000060  73 74 65 6d 22 00 2a 24  34 33 37 32 32 36 61 65  |stem".*$437226ae|
		00000070  2d 65 36 33 64 2d 34 32  34 35 2d 62 62 65 61 2d  |-e63d-4245-bbea-|
		00000080  61 64 35 63 34 31 66 66  39 61 39 33 32 03 34 34  |ad5c41ff9a932.44|
		00000090  38 38 00 42 08 08 e6 b4  cf bf 06 10 00 5a 13 0a  |88.B.........Z..|
		000000a0  07 6b 38 73 2d 61 70 70  12 08 6b 75 62 65 2d 64  |.k8s-app..kube-d|
		000000b0  6e 73 5a 1f 0a 11 70 6f  64 2d 74 65 6d 70 6c 61  |nsZ...pod-templa|
		000000c0  74 65 2d 68 61 73 68 12  0a 36 36 38 64 36 62 66  |te-hash..668d6b [truncated 205550 chars]
	 >
	I0407 14:00:29.632808    9720 system_pods.go:86] 8 kube-system pods found
	I0407 14:00:29.632808    9720 system_pods.go:89] "coredns-668d6bf9bc-5fp4f" [437226ae-e63d-4245-bbea-ad5c41ff9a93] Running
	I0407 14:00:29.632808    9720 system_pods.go:89] "etcd-multinode-140200" [b345c5f2-e6c0-4efb-910e-b28966fe032d] Running
	I0407 14:00:29.632808    9720 system_pods.go:89] "kindnet-zkw9q" [123858da-6f70-4b10-b38e-bd930d21dbe4] Running
	I0407 14:00:29.632808    9720 system_pods.go:89] "kube-apiserver-multinode-140200" [0f782906-98c2-4ded-9a98-bc7b14350b06] Running
	I0407 14:00:29.632808    9720 system_pods.go:89] "kube-controller-manager-multinode-140200" [a7c6e3bb-197c-434e-9f19-74d7e48b50de] Running
	I0407 14:00:29.632808    9720 system_pods.go:89] "kube-proxy-9rx2d" [2eaab25d-fe0b-4c48-ac6b-42095f5fbce6] Running
	I0407 14:00:29.632808    9720 system_pods.go:89] "kube-scheduler-multinode-140200" [88dfeee8-a3c1-485b-abfe-9eaf0057d6cf] Running
	I0407 14:00:29.632808    9720 system_pods.go:89] "storage-provisioner" [01df03d8-8816-480c-941b-180069d26997] Running
	I0407 14:00:29.632808    9720 system_pods.go:126] duration metric: took 201.9773ms to wait for k8s-apps to be running ...
	I0407 14:00:29.632808    9720 system_svc.go:44] waiting for kubelet service to be running ....
	I0407 14:00:29.644427    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 14:00:29.667832    9720 system_svc.go:56] duration metric: took 35.0237ms WaitForService to wait for kubelet
	I0407 14:00:29.667832    9720 kubeadm.go:582] duration metric: took 24.4107006s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 14:00:29.667961    9720 node_conditions.go:102] verifying NodePressure condition ...
	I0407 14:00:29.668028    9720 type.go:204] "Request Body" body=""
	I0407 14:00:29.825136    9720 request.go:661] Waited for 157.0203ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.92.89:8443/api/v1/nodes
	I0407 14:00:29.825640    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes
	I0407 14:00:29.825676    9720 round_trippers.go:476] Request Headers:
	I0407 14:00:29.825676    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:00:29.825676    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:00:29.831166    9720 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:00:29.831166    9720 round_trippers.go:584] Response Headers:
	I0407 14:00:29.831166    9720 round_trippers.go:587]     Audit-Id: ad3a26ca-a193-4c81-b3dc-8d9686819285
	I0407 14:00:29.831166    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:00:29.831166    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:00:29.831166    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:00:29.831166    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:00:29.831292    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:00:29 GMT
	I0407 14:00:29.831603    9720 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0e 0a 02  76 31 12 08 4e 6f 64 65  |k8s.....v1..Node|
		00000010  4c 69 73 74 12 cf 21 0a  09 0a 00 12 03 34 35 34  |List..!......454|
		00000020  1a 00 12 c1 21 0a fc 10  0a 10 6d 75 6c 74 69 6e  |....!.....multin|
		00000030  6f 64 65 2d 31 34 30 32  30 30 12 00 1a 00 22 00  |ode-140200....".|
		00000040  2a 24 31 66 35 33 62 34  63 64 2d 61 62 30 31 2d  |*$1f53b4cd-ab01-|
		00000050  34 32 63 61 2d 61 36 61  36 2d 61 39 33 65 66 63  |42ca-a6a6-a93efc|
		00000060  39 62 64 34 64 66 32 03  34 32 38 38 00 42 08 08  |9bd4df2.4288.B..|
		00000070  dd b4 cf bf 06 10 00 5a  20 0a 17 62 65 74 61 2e  |.......Z ..beta.|
		00000080  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		00000090  63 68 12 05 61 6d 64 36  34 5a 1e 0a 15 62 65 74  |ch..amd64Z...bet|
		000000a0  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		000000b0  6f 73 12 05 6c 69 6e 75  78 5a 1b 0a 12 6b 75 62  |os..linuxZ...kub|
		000000c0  65 72 6e 65 74 65 73 2e  69 6f 2f 61 72 63 68 12  |ernetes.io/arch [truncated 20379 chars]
	 >
	I0407 14:00:29.831877    9720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 14:00:29.831877    9720 node_conditions.go:123] node cpu capacity is 2
	I0407 14:00:29.831956    9720 node_conditions.go:105] duration metric: took 163.9937ms to run NodePressure ...
	I0407 14:00:29.831956    9720 start.go:241] waiting for startup goroutines ...
	I0407 14:00:29.832010    9720 start.go:246] waiting for cluster config update ...
	I0407 14:00:29.832031    9720 start.go:255] writing updated cluster config ...
	I0407 14:00:29.836685    9720 out.go:201] 
	I0407 14:00:29.838920    9720 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 14:00:29.853385    9720 config.go:182] Loaded profile config "multinode-140200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 14:00:29.853755    9720 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\config.json ...
	I0407 14:00:29.861360    9720 out.go:177] * Starting "multinode-140200-m02" worker node in "multinode-140200" cluster
	I0407 14:00:29.864944    9720 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 14:00:29.864944    9720 cache.go:56] Caching tarball of preloaded images
	I0407 14:00:29.865548    9720 preload.go:172] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0407 14:00:29.865548    9720 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0407 14:00:29.865548    9720 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\config.json ...
	I0407 14:00:29.873929    9720 start.go:360] acquireMachinesLock for multinode-140200-m02: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 14:00:29.874373    9720 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-140200-m02"
	I0407 14:00:29.874373    9720 start.go:93] Provisioning new machine with config: &{Name:multinode-140200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-140200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.92.89 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0407 14:00:29.874373    9720 start.go:125] createHost starting for "m02" (driver="hyperv")
	I0407 14:00:29.877902    9720 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0407 14:00:29.878744    9720 start.go:159] libmachine.API.Create for "multinode-140200" (driver="hyperv")
	I0407 14:00:29.878744    9720 client.go:168] LocalClient.Create starting
	I0407 14:00:29.878744    9720 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0407 14:00:29.878744    9720 main.go:141] libmachine: Decoding PEM data...
	I0407 14:00:29.878744    9720 main.go:141] libmachine: Parsing certificate...
	I0407 14:00:29.878744    9720 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0407 14:00:29.878744    9720 main.go:141] libmachine: Decoding PEM data...
	I0407 14:00:29.878744    9720 main.go:141] libmachine: Parsing certificate...
	I0407 14:00:29.878744    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0407 14:00:31.819634    9720 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0407 14:00:31.819634    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:00:31.819701    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0407 14:00:33.586912    9720 main.go:141] libmachine: [stdout =====>] : False
	
	I0407 14:00:33.587177    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:00:33.587256    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0407 14:00:35.105285    9720 main.go:141] libmachine: [stdout =====>] : True
	
	I0407 14:00:35.105285    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:00:35.105371    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0407 14:00:38.854127    9720 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0407 14:00:38.854127    9720 main.go:141] libmachine: [stderr =====>] : 
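	(Editor's illustration, not part of the captured log: the switch-discovery step above shells out to PowerShell and parses JSON. A minimal Go sketch of the same idea, with the Where-Object filter from the log dropped for brevity; it is a sketch, not minikube's actual driver code.)

	package main

	import (
		"encoding/json"
		"fmt"
		"log"
		"os/exec"
	)

	// vmSwitch mirrors the fields selected by the Get-VMSwitch pipeline in the log.
	type vmSwitch struct {
		Id         string
		Name       string
		SwitchType int
	}

	func main() {
		// Same pipeline shape as the command logged above.
		script := `[Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType | Sort-Object -Property SwitchType)`
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", script).Output()
		if err != nil {
			log.Fatal(err)
		}
		var switches []vmSwitch
		if err := json.Unmarshal(out, &switches); err != nil {
			log.Fatal(err)
		}
		for _, s := range switches {
			fmt.Printf("%s (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
		}
	}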
	I0407 14:00:38.857668    9720 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0407 14:00:39.404683    9720 main.go:141] libmachine: Creating SSH key...
	I0407 14:00:39.749614    9720 main.go:141] libmachine: Creating VM...
	I0407 14:00:39.749614    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0407 14:00:42.766172    9720 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0407 14:00:42.766172    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:00:42.767082    9720 main.go:141] libmachine: Using switch "Default Switch"
	I0407 14:00:42.767195    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0407 14:00:44.521063    9720 main.go:141] libmachine: [stdout =====>] : True
	
	I0407 14:00:44.521150    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:00:44.521378    9720 main.go:141] libmachine: Creating VHD
	I0407 14:00:44.521378    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200-m02\fixed.vhd' -SizeBytes 10MB -Fixed
	I0407 14:00:48.357747    9720 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200-m02\fixed
	                          .vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 8B523569-E535-4BEC-A1EC-820AB2F2AC88
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0407 14:00:48.357747    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:00:48.357747    9720 main.go:141] libmachine: Writing magic tar header
	I0407 14:00:48.357747    9720 main.go:141] libmachine: Writing SSH key tar header
	I0407 14:00:48.370720    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200-m02\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200-m02\disk.vhd' -VHDType Dynamic -DeleteSource
	I0407 14:00:51.582373    9720 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:00:51.583070    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:00:51.583070    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200-m02\disk.vhd' -SizeBytes 20000MB
	I0407 14:00:54.105022    9720 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:00:54.105903    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:00:54.105903    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM multinode-140200-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200-m02' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0407 14:00:57.761618    9720 main.go:141] libmachine: [stdout =====>] : 
	Name                 State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                 ----- ----------- ----------------- ------   ------             -------
	multinode-140200-m02 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0407 14:00:57.761618    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:00:57.761872    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName multinode-140200-m02 -DynamicMemoryEnabled $false
	I0407 14:00:59.998903    9720 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:00:59.998903    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:00:59.999742    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor multinode-140200-m02 -Count 2
	I0407 14:01:02.243729    9720 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:01:02.244907    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:01:02.244984    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName multinode-140200-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200-m02\boot2docker.iso'
	I0407 14:01:04.832627    9720 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:01:04.833166    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:01:04.833308    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName multinode-140200-m02 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200-m02\disk.vhd'
	I0407 14:01:07.452011    9720 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:01:07.453033    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:01:07.453033    9720 main.go:141] libmachine: Starting VM...
	I0407 14:01:07.453073    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-140200-m02
	I0407 14:01:10.530225    9720 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:01:10.530473    9720 main.go:141] libmachine: [stderr =====>] : 
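	(Editor's illustration: between the switch query and this point the log runs a fixed cmdlet sequence: seed a 10 MB fixed VHD, write the generated SSH key into it as a tar payload, convert it to a dynamic disk, grow it to 20000 MB, create the VM, pin memory and CPU, attach the boot2docker ISO and the disk, then start the VM. A minimal Go sketch that replays the same cmdlets for a hypothetical VM; names and paths are placeholders, and an elevated session with the Hyper-V module is assumed.)

	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	// ps runs one PowerShell command and aborts on failure, mirroring the
	// "[executing ==>]" pattern in the log.
	func ps(cmd string) {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", cmd).CombinedOutput()
		if err != nil {
			log.Fatalf("%s: %v\n%s", cmd, err, out)
		}
		fmt.Printf("%s", out)
	}

	func main() {
		name, dir := "demo-vm", `C:\vms\demo-vm` // hypothetical values
		ps(fmt.Sprintf(`Hyper-V\New-VHD -Path '%s\fixed.vhd' -SizeBytes 10MB -Fixed`, dir))
		// (in the log, minikube writes a tar containing the SSH key into fixed.vhd here)
		ps(fmt.Sprintf(`Hyper-V\Convert-VHD -Path '%s\fixed.vhd' -DestinationPath '%s\disk.vhd' -VHDType Dynamic -DeleteSource`, dir, dir))
		ps(fmt.Sprintf(`Hyper-V\Resize-VHD -Path '%s\disk.vhd' -SizeBytes 20000MB`, dir))
		ps(fmt.Sprintf(`Hyper-V\New-VM %s -Path '%s' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`, name, dir))
		ps(fmt.Sprintf(`Hyper-V\Set-VMMemory -VMName %s -DynamicMemoryEnabled $false`, name))
		ps(fmt.Sprintf(`Hyper-V\Set-VMProcessor %s -Count 2`, name))
		ps(fmt.Sprintf(`Hyper-V\Set-VMDvdDrive -VMName %s -Path '%s\boot2docker.iso'`, name, dir))
		ps(fmt.Sprintf(`Hyper-V\Add-VMHardDiskDrive -VMName %s -Path '%s\disk.vhd'`, name, dir))
		ps(fmt.Sprintf(`Hyper-V\Start-VM %s`, name))
	}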
	I0407 14:01:10.530473    9720 main.go:141] libmachine: Waiting for host to start...
	I0407 14:01:10.530580    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:01:12.889968    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:01:12.889968    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:01:12.891064    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:01:15.423376    9720 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:01:15.423376    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:01:16.424515    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:01:18.679580    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:01:18.679580    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:01:18.679718    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:01:21.200189    9720 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:01:21.200189    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:01:22.201549    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:01:24.443031    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:01:24.443031    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:01:24.443273    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:01:26.997344    9720 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:01:26.997344    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:01:27.997817    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:01:30.235496    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:01:30.236815    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:01:30.236876    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:01:32.821668    9720 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:01:32.821668    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:01:33.822945    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:01:36.083664    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:01:36.084532    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:01:36.084591    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:01:38.672504    9720 main.go:141] libmachine: [stdout =====>] : 172.17.82.40
	
	I0407 14:01:38.672504    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:01:38.673047    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:01:40.885951    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:01:40.886166    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:01:40.886166    9720 machine.go:93] provisionDockerMachine start ...
	I0407 14:01:40.886299    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:01:43.052797    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:01:43.052797    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:01:43.052797    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:01:45.613694    9720 main.go:141] libmachine: [stdout =====>] : 172.17.82.40
	
	I0407 14:01:45.613759    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:01:45.618152    9720 main.go:141] libmachine: Using SSH client type: native
	I0407 14:01:45.634667    9720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.82.40 22 <nil> <nil>}
	I0407 14:01:45.634667    9720 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 14:01:45.758446    9720 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0407 14:01:45.758446    9720 buildroot.go:166] provisioning hostname "multinode-140200-m02"
	I0407 14:01:45.758446    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:01:47.882346    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:01:47.882820    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:01:47.882890    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:01:50.435476    9720 main.go:141] libmachine: [stdout =====>] : 172.17.82.40
	
	I0407 14:01:50.435476    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:01:50.443371    9720 main.go:141] libmachine: Using SSH client type: native
	I0407 14:01:50.443669    9720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.82.40 22 <nil> <nil>}
	I0407 14:01:50.443669    9720 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-140200-m02 && echo "multinode-140200-m02" | sudo tee /etc/hostname
	I0407 14:01:50.601003    9720 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-140200-m02
	
	I0407 14:01:50.601187    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:01:52.762271    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:01:52.762271    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:01:52.762374    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:01:55.281099    9720 main.go:141] libmachine: [stdout =====>] : 172.17.82.40
	
	I0407 14:01:55.281267    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:01:55.286559    9720 main.go:141] libmachine: Using SSH client type: native
	I0407 14:01:55.287267    9720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.82.40 22 <nil> <nil>}
	I0407 14:01:55.287267    9720 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-140200-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-140200-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-140200-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 14:01:55.431485    9720 main.go:141] libmachine: SSH cmd err, output: <nil>: 
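	(Editor's illustration: the hostname and /etc/hosts provisioning above runs shell snippets over SSH, authenticating with the machine's generated id_rsa. A minimal Go sketch of that transport; the IP, user, and key path are taken from the log, everything else is assumed.)

	package main

	import (
		"fmt"
		"log"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		key, err := os.ReadFile(`C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200-m02\id_rsa`)
		if err != nil {
			log.Fatal(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			log.Fatal(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", "172.17.82.40:22", cfg)
		if err != nil {
			log.Fatal(err)
		}
		defer client.Close()
		sess, err := client.NewSession()
		if err != nil {
			log.Fatal(err)
		}
		defer sess.Close()
		// One of the provisioning commands shown in the log.
		out, err := sess.CombinedOutput(`sudo hostname multinode-140200-m02 && echo "multinode-140200-m02" | sudo tee /etc/hostname`)
		fmt.Printf("%s err=%v\n", out, err)
	}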
	I0407 14:01:55.432467    9720 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0407 14:01:55.432467    9720 buildroot.go:174] setting up certificates
	I0407 14:01:55.432467    9720 provision.go:84] configureAuth start
	I0407 14:01:55.432467    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:01:57.617059    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:01:57.617790    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:01:57.617790    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:02:00.182112    9720 main.go:141] libmachine: [stdout =====>] : 172.17.82.40
	
	I0407 14:02:00.182288    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:02:00.182288    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:02:02.337689    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:02:02.338401    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:02:02.338527    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:02:04.945443    9720 main.go:141] libmachine: [stdout =====>] : 172.17.82.40
	
	I0407 14:02:04.945443    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:02:04.946396    9720 provision.go:143] copyHostCerts
	I0407 14:02:04.946551    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0407 14:02:04.946606    9720 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0407 14:02:04.946606    9720 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0407 14:02:04.947207    9720 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0407 14:02:04.947897    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0407 14:02:04.948457    9720 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0407 14:02:04.948649    9720 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0407 14:02:04.948919    9720 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0407 14:02:04.949921    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0407 14:02:04.950082    9720 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0407 14:02:04.950082    9720 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0407 14:02:04.950082    9720 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0407 14:02:04.951598    9720 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-140200-m02 san=[127.0.0.1 172.17.82.40 localhost minikube multinode-140200-m02]
	I0407 14:02:04.992219    9720 provision.go:177] copyRemoteCerts
	I0407 14:02:05.003221    9720 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 14:02:05.003221    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:02:07.156950    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:02:07.157802    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:02:07.157851    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:02:09.692574    9720 main.go:141] libmachine: [stdout =====>] : 172.17.82.40
	
	I0407 14:02:09.693037    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:02:09.693240    9720 sshutil.go:53] new ssh client: &{IP:172.17.82.40 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200-m02\id_rsa Username:docker}
	I0407 14:02:09.801834    9720 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.7984791s)
	I0407 14:02:09.801899    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0407 14:02:09.801899    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 14:02:09.844664    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0407 14:02:09.844664    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0407 14:02:09.888823    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0407 14:02:09.889269    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0407 14:02:09.930403    9720 provision.go:87] duration metric: took 14.4978328s to configureAuth
	I0407 14:02:09.930403    9720 buildroot.go:189] setting minikube options for container-runtime
	I0407 14:02:09.931397    9720 config.go:182] Loaded profile config "multinode-140200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 14:02:09.931397    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:02:12.047801    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:02:12.048881    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:02:12.048985    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:02:14.620768    9720 main.go:141] libmachine: [stdout =====>] : 172.17.82.40
	
	I0407 14:02:14.620768    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:02:14.626587    9720 main.go:141] libmachine: Using SSH client type: native
	I0407 14:02:14.626720    9720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.82.40 22 <nil> <nil>}
	I0407 14:02:14.626720    9720 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0407 14:02:14.756279    9720 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0407 14:02:14.756279    9720 buildroot.go:70] root file system type: tmpfs
	I0407 14:02:14.756279    9720 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0407 14:02:14.756897    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:02:16.924363    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:02:16.924363    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:02:16.924551    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:02:19.478976    9720 main.go:141] libmachine: [stdout =====>] : 172.17.82.40
	
	I0407 14:02:19.479367    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:02:19.485086    9720 main.go:141] libmachine: Using SSH client type: native
	I0407 14:02:19.485674    9720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.82.40 22 <nil> <nil>}
	I0407 14:02:19.485806    9720 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.92.89"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0407 14:02:19.645415    9720 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.92.89
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0407 14:02:19.645538    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:02:21.812502    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:02:21.813528    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:02:21.813681    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:02:24.420802    9720 main.go:141] libmachine: [stdout =====>] : 172.17.82.40
	
	I0407 14:02:24.420802    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:02:24.426126    9720 main.go:141] libmachine: Using SSH client type: native
	I0407 14:02:24.426920    9720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.82.40 22 <nil> <nil>}
	I0407 14:02:24.426920    9720 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0407 14:02:26.652386    9720 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0407 14:02:26.652386    9720 machine.go:96] duration metric: took 45.7658953s to provisionDockerMachine
	I0407 14:02:26.652386    9720 client.go:171] duration metric: took 1m56.7728202s to LocalClient.Create
	I0407 14:02:26.652386    9720 start.go:167] duration metric: took 1m56.7728202s to libmachine.API.Create "multinode-140200"
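	(Editor's illustration: the docker unit install above uses a diff-or-replace idiom. On this fresh VM the `diff` fails because /lib/systemd/system/docker.service does not exist yet, so the `||` branch moves the rendered unit into place, reloads systemd, and enables/restarts docker, which is why the "Created symlink" line appears. A minimal Go sketch of the same install-if-changed idea; path and content are placeholders.)

	package main

	import (
		"bytes"
		"fmt"
		"os"
	)

	// installIfChanged writes rendered to path only when the on-disk content is
	// missing or different, and reports whether a restart would be needed.
	func installIfChanged(path string, rendered []byte) (bool, error) {
		current, err := os.ReadFile(path)
		if err == nil && bytes.Equal(current, rendered) {
			return false, nil // unit already up to date, nothing to restart
		}
		// Missing file (first boot, as in the log) or different content: replace it.
		if err := os.WriteFile(path, rendered, 0o644); err != nil {
			return false, err
		}
		return true, nil
	}

	func main() {
		changed, err := installIfChanged("/tmp/docker.service", []byte("[Unit]\nDescription=demo\n"))
		fmt.Println("changed:", changed, "err:", err)
	}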
	I0407 14:02:26.652386    9720 start.go:293] postStartSetup for "multinode-140200-m02" (driver="hyperv")
	I0407 14:02:26.652386    9720 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 14:02:26.663939    9720 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 14:02:26.663939    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:02:28.809566    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:02:28.810367    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:02:28.810476    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:02:31.432283    9720 main.go:141] libmachine: [stdout =====>] : 172.17.82.40
	
	I0407 14:02:31.432283    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:02:31.433223    9720 sshutil.go:53] new ssh client: &{IP:172.17.82.40 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200-m02\id_rsa Username:docker}
	I0407 14:02:31.535460    9720 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.8714093s)
	I0407 14:02:31.546790    9720 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 14:02:31.557017    9720 command_runner.go:130] > NAME=Buildroot
	I0407 14:02:31.557017    9720 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0407 14:02:31.557507    9720 command_runner.go:130] > ID=buildroot
	I0407 14:02:31.557507    9720 command_runner.go:130] > VERSION_ID=2023.02.9
	I0407 14:02:31.557507    9720 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0407 14:02:31.557507    9720 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 14:02:31.557507    9720 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0407 14:02:31.558120    9720 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0407 14:02:31.558801    9720 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> 77282.pem in /etc/ssl/certs
	I0407 14:02:31.558801    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> /etc/ssl/certs/77282.pem
	I0407 14:02:31.570898    9720 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 14:02:31.592336    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem --> /etc/ssl/certs/77282.pem (1708 bytes)
	I0407 14:02:31.641275    9720 start.go:296] duration metric: took 4.9888532s for postStartSetup
	I0407 14:02:31.644291    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:02:33.802094    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:02:33.802094    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:02:33.803070    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:02:36.397698    9720 main.go:141] libmachine: [stdout =====>] : 172.17.82.40
	
	I0407 14:02:36.397698    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:02:36.397698    9720 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\config.json ...
	I0407 14:02:36.400530    9720 start.go:128] duration metric: took 2m6.5252643s to createHost
	I0407 14:02:36.400530    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:02:38.567050    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:02:38.567050    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:02:38.567050    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:02:41.117814    9720 main.go:141] libmachine: [stdout =====>] : 172.17.82.40
	
	I0407 14:02:41.117814    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:02:41.122962    9720 main.go:141] libmachine: Using SSH client type: native
	I0407 14:02:41.123615    9720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.82.40 22 <nil> <nil>}
	I0407 14:02:41.123615    9720 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 14:02:41.257903    9720 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744034561.278110903
	
	I0407 14:02:41.257903    9720 fix.go:216] guest clock: 1744034561.278110903
	I0407 14:02:41.257903    9720 fix.go:229] Guest: 2025-04-07 14:02:41.278110903 +0000 UTC Remote: 2025-04-07 14:02:36.4005302 +0000 UTC m=+339.690820701 (delta=4.877580703s)
	I0407 14:02:41.258072    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:02:43.415775    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:02:43.415775    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:02:43.415775    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:02:45.961653    9720 main.go:141] libmachine: [stdout =====>] : 172.17.82.40
	
	I0407 14:02:45.961653    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:02:45.967052    9720 main.go:141] libmachine: Using SSH client type: native
	I0407 14:02:45.967741    9720 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.82.40 22 <nil> <nil>}
	I0407 14:02:45.967741    9720 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744034561
	I0407 14:02:46.108373    9720 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  7 14:02:41 UTC 2025
	
	I0407 14:02:46.108373    9720 fix.go:236] clock set: Mon Apr  7 14:02:41 UTC 2025
	 (err=<nil>)
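	(Editor's illustration: the clock-fix step above reads the guest's `date +%s.%N`, compares it with the host-side timestamp to compute the skew, and then sets the guest clock with `date -s @<seconds>`. A tiny Go sketch of that arithmetic; the guest value is the one reported in the log, and sub-second precision is intentionally ignored, as in the command the log runs.)

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	func main() {
		guestOut := "1744034561.278110903" // value the guest reported in the log above
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
		if err != nil {
			panic(err)
		}
		guest := time.Unix(int64(secs), 0)
		fmt.Println("guest clock:", guest.UTC())
		fmt.Println("skew vs. local clock:", time.Since(guest).Truncate(time.Millisecond))
		// The guest is then resynchronized with the whole-second timestamp:
		fmt.Printf("sudo date -s @%d\n", int64(secs))
	}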
	I0407 14:02:46.108373    9720 start.go:83] releasing machines lock for "multinode-140200-m02", held for 2m16.2330378s
	I0407 14:02:46.108954    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:02:48.252420    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:02:48.252420    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:02:48.252420    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:02:50.809396    9720 main.go:141] libmachine: [stdout =====>] : 172.17.82.40
	
	I0407 14:02:50.809648    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:02:50.812936    9720 out.go:177] * Found network options:
	I0407 14:02:50.815773    9720 out.go:177]   - NO_PROXY=172.17.92.89
	W0407 14:02:50.818008    9720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0407 14:02:50.820398    9720 out.go:177]   - NO_PROXY=172.17.92.89
	W0407 14:02:50.823323    9720 proxy.go:119] fail to check proxy env: Error ip not in block
	W0407 14:02:50.826136    9720 proxy.go:119] fail to check proxy env: Error ip not in block
	I0407 14:02:50.828541    9720 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0407 14:02:50.828641    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:02:50.837762    9720 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0407 14:02:50.837762    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:02:53.090798    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:02:53.090968    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:02:53.091268    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:02:53.093217    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:02:53.093217    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:02:53.093308    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:02:55.727997    9720 main.go:141] libmachine: [stdout =====>] : 172.17.82.40
	
	I0407 14:02:55.728696    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:02:55.728894    9720 sshutil.go:53] new ssh client: &{IP:172.17.82.40 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200-m02\id_rsa Username:docker}
	I0407 14:02:55.750629    9720 main.go:141] libmachine: [stdout =====>] : 172.17.82.40
	
	I0407 14:02:55.750629    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:02:55.750629    9720 sshutil.go:53] new ssh client: &{IP:172.17.82.40 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200-m02\id_rsa Username:docker}
	I0407 14:02:55.827548    9720 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0407 14:02:55.827798    9720 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (4.9992206s)
	W0407 14:02:55.827798    9720 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0407 14:02:55.855165    9720 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	I0407 14:02:55.855873    9720 ssh_runner.go:235] Completed: sh -c "stat /etc/cni/net.d/*loopback.conf*": (5.0180748s)
	W0407 14:02:55.855956    9720 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 14:02:55.867269    9720 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 14:02:55.896470    9720 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0407 14:02:55.896470    9720 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 14:02:55.896470    9720 start.go:495] detecting cgroup driver to use...
	I0407 14:02:55.896470    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 14:02:55.942710    9720 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	W0407 14:02:55.947933    9720 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0407 14:02:55.947933    9720 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0407 14:02:55.955189    9720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0407 14:02:55.987281    9720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0407 14:02:56.007140    9720 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0407 14:02:56.021295    9720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0407 14:02:56.056656    9720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 14:02:56.089987    9720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0407 14:02:56.120474    9720 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 14:02:56.153458    9720 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 14:02:56.184711    9720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0407 14:02:56.215323    9720 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0407 14:02:56.244689    9720 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0407 14:02:56.273686    9720 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 14:02:56.292738    9720 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 14:02:56.293086    9720 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 14:02:56.305486    9720 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 14:02:56.341016    9720 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 14:02:56.366169    9720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:02:56.574825    9720 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0407 14:02:56.604300    9720 start.go:495] detecting cgroup driver to use...
	I0407 14:02:56.614997    9720 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0407 14:02:56.639199    9720 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0407 14:02:56.639303    9720 command_runner.go:130] > [Unit]
	I0407 14:02:56.639303    9720 command_runner.go:130] > Description=Docker Application Container Engine
	I0407 14:02:56.639303    9720 command_runner.go:130] > Documentation=https://docs.docker.com
	I0407 14:02:56.639303    9720 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0407 14:02:56.639303    9720 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0407 14:02:56.639382    9720 command_runner.go:130] > StartLimitBurst=3
	I0407 14:02:56.639382    9720 command_runner.go:130] > StartLimitIntervalSec=60
	I0407 14:02:56.639382    9720 command_runner.go:130] > [Service]
	I0407 14:02:56.639422    9720 command_runner.go:130] > Type=notify
	I0407 14:02:56.639422    9720 command_runner.go:130] > Restart=on-failure
	I0407 14:02:56.639422    9720 command_runner.go:130] > Environment=NO_PROXY=172.17.92.89
	I0407 14:02:56.639422    9720 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0407 14:02:56.639422    9720 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0407 14:02:56.639422    9720 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0407 14:02:56.639422    9720 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0407 14:02:56.639422    9720 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0407 14:02:56.639422    9720 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0407 14:02:56.639422    9720 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0407 14:02:56.639422    9720 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0407 14:02:56.639422    9720 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0407 14:02:56.639422    9720 command_runner.go:130] > ExecStart=
	I0407 14:02:56.639422    9720 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0407 14:02:56.639599    9720 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0407 14:02:56.639599    9720 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0407 14:02:56.639599    9720 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0407 14:02:56.639599    9720 command_runner.go:130] > LimitNOFILE=infinity
	I0407 14:02:56.639599    9720 command_runner.go:130] > LimitNPROC=infinity
	I0407 14:02:56.639599    9720 command_runner.go:130] > LimitCORE=infinity
	I0407 14:02:56.639599    9720 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0407 14:02:56.639599    9720 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0407 14:02:56.639599    9720 command_runner.go:130] > TasksMax=infinity
	I0407 14:02:56.639599    9720 command_runner.go:130] > TimeoutStartSec=0
	I0407 14:02:56.639599    9720 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0407 14:02:56.639599    9720 command_runner.go:130] > Delegate=yes
	I0407 14:02:56.639599    9720 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0407 14:02:56.639599    9720 command_runner.go:130] > KillMode=process
	I0407 14:02:56.639599    9720 command_runner.go:130] > [Install]
	I0407 14:02:56.639599    9720 command_runner.go:130] > WantedBy=multi-user.target
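The unit dump above shows why minikube writes an empty ExecStart= before the real one: for a non-oneshot service, systemd rejects a second ExecStart unless the value inherited from the base unit is cleared first. A minimal Go sketch of rendering such a drop-in follows; it is illustrative only (renderDockerDropIn is a hypothetical helper, and the flag list is abridged compared with the logged unit).

package main

import (
	"fmt"
	"strings"
)

// renderDockerDropIn is a hypothetical helper, not minikube's implementation.
// It renders a systemd drop-in that resets the inherited ExecStart and then
// sets the desired dockerd command line.
func renderDockerDropIn(noProxy string, dockerdFlags []string) string {
	var b strings.Builder
	b.WriteString("[Service]\n")
	b.WriteString("Type=notify\n")
	b.WriteString("Restart=on-failure\n")
	fmt.Fprintf(&b, "Environment=NO_PROXY=%s\n", noProxy)
	// The empty assignment clears ExecStart from the base unit; without it
	// systemd refuses to start with "more than one ExecStart= setting".
	b.WriteString("ExecStart=\n")
	b.WriteString("ExecStart=/usr/bin/dockerd " + strings.Join(dockerdFlags, " ") + "\n")
	return b.String()
}

func main() {
	fmt.Print(renderDockerDropIn("172.17.92.89", []string{
		"-H", "tcp://0.0.0.0:2376",
		"-H", "unix:///var/run/docker.sock",
	}))
}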
	I0407 14:02:56.651430    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 14:02:56.686031    9720 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 14:02:56.721899    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 14:02:56.754386    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 14:02:56.796500    9720 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0407 14:02:56.856618    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 14:02:56.879429    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 14:02:56.914423    9720 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0407 14:02:56.926679    9720 ssh_runner.go:195] Run: which cri-dockerd
	I0407 14:02:56.931661    9720 command_runner.go:130] > /usr/bin/cri-dockerd
	I0407 14:02:56.942791    9720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0407 14:02:56.959584    9720 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0407 14:02:57.003731    9720 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0407 14:02:57.210532    9720 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0407 14:02:57.409628    9720 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0407 14:02:57.409628    9720 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
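docker.go:574 reports that the daemon is being switched to the "cgroupfs" cgroup driver via a 130-byte /etc/docker/daemon.json. The file itself is not reproduced in this log; the sketch below only illustrates the documented dockerd knob involved (exec-opts with native.cgroupdriver=cgroupfs) and the real file may carry additional keys.

package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Assumed/illustrative content: the daemon.json minikube writes is not
	// shown in the log and may include more settings than this.
	cfg := map[string]any{
		"exec-opts": []string{"native.cgroupdriver=cgroupfs"},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out)) // candidate content for /etc/docker/daemon.json
}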
	I0407 14:02:57.452767    9720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:02:57.640550    9720 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0407 14:03:00.206185    9720 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.5656173s)
	I0407 14:03:00.219951    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0407 14:03:00.255962    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 14:03:00.291268    9720 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0407 14:03:00.490903    9720 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0407 14:03:00.699924    9720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:03:00.896497    9720 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0407 14:03:00.936176    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 14:03:00.970035    9720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:03:01.170413    9720 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0407 14:03:01.276488    9720 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0407 14:03:01.286817    9720 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0407 14:03:01.294455    9720 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0407 14:03:01.294599    9720 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0407 14:03:01.294599    9720 command_runner.go:130] > Device: 0,22	Inode: 875         Links: 1
	I0407 14:03:01.294682    9720 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0407 14:03:01.294682    9720 command_runner.go:130] > Access: 2025-04-07 14:03:01.219040257 +0000
	I0407 14:03:01.294682    9720 command_runner.go:130] > Modify: 2025-04-07 14:03:01.219040257 +0000
	I0407 14:03:01.294682    9720 command_runner.go:130] > Change: 2025-04-07 14:03:01.223040272 +0000
	I0407 14:03:01.294682    9720 command_runner.go:130] >  Birth: -
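start.go:542 waits up to 60s for /var/run/cri-dockerd.sock, and the stat output above confirms the socket appeared. A small Go sketch of that wait-for-socket pattern (checked locally with os.Stat here, whereas minikube runs stat over SSH):

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until the path exists and is a unix socket, or the
// timeout expires. This mirrors the logged behaviour in spirit only.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
			return nil // socket is present
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
}

func main() {
	fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
}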
	I0407 14:03:01.294974    9720 start.go:563] Will wait 60s for crictl version
	I0407 14:03:01.304814    9720 ssh_runner.go:195] Run: which crictl
	I0407 14:03:01.312024    9720 command_runner.go:130] > /usr/bin/crictl
	I0407 14:03:01.323276    9720 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 14:03:01.375398    9720 command_runner.go:130] > Version:  0.1.0
	I0407 14:03:01.375529    9720 command_runner.go:130] > RuntimeName:  docker
	I0407 14:03:01.375529    9720 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0407 14:03:01.375556    9720 command_runner.go:130] > RuntimeApiVersion:  v1
	I0407 14:03:01.375556    9720 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0407 14:03:01.385131    9720 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 14:03:01.416884    9720 command_runner.go:130] > 27.4.0
	I0407 14:03:01.427656    9720 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 14:03:01.464141    9720 command_runner.go:130] > 27.4.0
	I0407 14:03:01.468500    9720 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0407 14:03:01.472961    9720 out.go:177]   - env NO_PROXY=172.17.92.89
	I0407 14:03:01.475492    9720 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0407 14:03:01.479595    9720 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0407 14:03:01.479756    9720 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0407 14:03:01.479756    9720 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0407 14:03:01.479756    9720 ip.go:211] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:71:e6:e3 Flags:up|broadcast|multicast|running}
	I0407 14:03:01.482015    9720 ip.go:214] interface addr: fe80::8b22:fcbb:f73:c9e5/64
	I0407 14:03:01.482015    9720 ip.go:214] interface addr: 172.17.80.1/20
	I0407 14:03:01.492382    9720 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0407 14:03:01.503062    9720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
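The bash one-liner above strips any stale host.minikube.internal entry from /etc/hosts and appends the detected gateway IP (172.17.80.1). The same upsert expressed as a Go sketch, operating on a local copy of the file (path and permissions are illustrative):

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost removes any line ending in "\t<name>" and appends "<ip>\t<name>",
// the same effect as the grep -v / echo / cp pipeline in the log.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	lines := strings.Split(strings.TrimRight(string(data), "\n"), "\n")
	var kept []string
	for _, line := range lines {
		if strings.HasSuffix(line, "\t"+name) {
			continue // drop the stale entry, like grep -v in the log
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	// Run against a local copy; rewriting the real /etc/hosts needs root.
	fmt.Println(upsertHost("hosts.copy", "172.17.80.1", "host.minikube.internal"))
}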
	I0407 14:03:01.523887    9720 mustload.go:65] Loading cluster: multinode-140200
	I0407 14:03:01.524666    9720 config.go:182] Loaded profile config "multinode-140200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 14:03:01.525569    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:03:03.653460    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:03:03.653460    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:03:03.653460    9720 host.go:66] Checking if "multinode-140200" exists ...
	I0407 14:03:03.655114    9720 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200 for IP: 172.17.82.40
	I0407 14:03:03.655226    9720 certs.go:194] generating shared ca certs ...
	I0407 14:03:03.655226    9720 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:03:03.655979    9720 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0407 14:03:03.656389    9720 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0407 14:03:03.656389    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0407 14:03:03.656389    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0407 14:03:03.656930    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0407 14:03:03.657190    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0407 14:03:03.657681    9720 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem (1338 bytes)
	W0407 14:03:03.657835    9720 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728_empty.pem, impossibly tiny 0 bytes
	I0407 14:03:03.657835    9720 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0407 14:03:03.657835    9720 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0407 14:03:03.658657    9720 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0407 14:03:03.658976    9720 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0407 14:03:03.659506    9720 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem (1708 bytes)
	I0407 14:03:03.659800    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:03:03.660061    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem -> /usr/share/ca-certificates/7728.pem
	I0407 14:03:03.660061    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> /usr/share/ca-certificates/77282.pem
	I0407 14:03:03.660061    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 14:03:03.706922    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 14:03:03.750036    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 14:03:03.791918    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0407 14:03:03.833934    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 14:03:03.878171    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem --> /usr/share/ca-certificates/7728.pem (1338 bytes)
	I0407 14:03:03.921644    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem --> /usr/share/ca-certificates/77282.pem (1708 bytes)
	I0407 14:03:03.978478    9720 ssh_runner.go:195] Run: openssl version
	I0407 14:03:03.986693    9720 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0407 14:03:03.997409    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 14:03:04.028281    9720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:03:04.036318    9720 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr  7 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:03:04.036657    9720 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:03:04.054164    9720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:03:04.063440    9720 command_runner.go:130] > b5213941
	I0407 14:03:04.074040    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 14:03:04.102867    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7728.pem && ln -fs /usr/share/ca-certificates/7728.pem /etc/ssl/certs/7728.pem"
	I0407 14:03:04.130575    9720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7728.pem
	I0407 14:03:04.137844    9720 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr  7 12:38 /usr/share/ca-certificates/7728.pem
	I0407 14:03:04.138650    9720 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:38 /usr/share/ca-certificates/7728.pem
	I0407 14:03:04.149870    9720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7728.pem
	I0407 14:03:04.158783    9720 command_runner.go:130] > 51391683
	I0407 14:03:04.171532    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7728.pem /etc/ssl/certs/51391683.0"
	I0407 14:03:04.202933    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77282.pem && ln -fs /usr/share/ca-certificates/77282.pem /etc/ssl/certs/77282.pem"
	I0407 14:03:04.232089    9720 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77282.pem
	I0407 14:03:04.239446    9720 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr  7 12:38 /usr/share/ca-certificates/77282.pem
	I0407 14:03:04.239506    9720 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:38 /usr/share/ca-certificates/77282.pem
	I0407 14:03:04.249803    9720 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77282.pem
	I0407 14:03:04.258104    9720 command_runner.go:130] > 3ec20f2e
	I0407 14:03:04.269596    9720 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77282.pem /etc/ssl/certs/3ec20f2e.0"
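Each CA bundle copied above is made discoverable to TLS clients by symlinking it under its OpenSSL subject hash (e.g. b5213941.0 for minikubeCA.pem). A Go sketch of that hash-and-link step, shelling out to the same openssl invocation seen in the log (installCA is a hypothetical helper; writing under /etc/ssl/certs requires root):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// installCA computes the OpenSSL subject hash of a CA certificate and links it
// as /etc/ssl/certs/<hash>.0, combining the `openssl x509 -hash` and `ln -fs`
// steps that appear in the log.
func installCA(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem
	link := "/etc/ssl/certs/" + hash + ".0"
	_ = os.Remove(link) // emulate ln -fs: replace any stale link
	return os.Symlink(pemPath, link)
}

func main() {
	fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem"))
}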
	I0407 14:03:04.300787    9720 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 14:03:04.306322    9720 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0407 14:03:04.306936    9720 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0407 14:03:04.306936    9720 kubeadm.go:934] updating node {m02 172.17.82.40 8443 v1.32.2 docker false true} ...
	I0407 14:03:04.306936    9720 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-140200-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.82.40
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:multinode-140200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 14:03:04.317627    9720 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 14:03:04.334194    9720 command_runner.go:130] ! ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	I0407 14:03:04.334665    9720 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.32.2: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.32.2': No such file or directory
	
	Initiating transfer...
	I0407 14:03:04.345934    9720 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.32.2
	I0407 14:03:04.366714    9720 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubeadm.sha256
	I0407 14:03:04.366714    9720 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubectl.sha256
	I0407 14:03:04.366714    9720 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/linux/amd64/kubelet.sha256
	I0407 14:03:04.366839    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl -> /var/lib/minikube/binaries/v1.32.2/kubectl
	I0407 14:03:04.366839    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm -> /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0407 14:03:04.379196    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 14:03:04.380992    9720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl
	I0407 14:03:04.381425    9720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm
	I0407 14:03:04.404110    9720 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0407 14:03:04.404110    9720 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubectl': No such file or directory
	I0407 14:03:04.404110    9720 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet -> /var/lib/minikube/binaries/v1.32.2/kubelet
	I0407 14:03:04.404329    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubectl --> /var/lib/minikube/binaries/v1.32.2/kubectl (57323672 bytes)
	I0407 14:03:04.404392    9720 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0407 14:03:04.404392    9720 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubeadm': No such file or directory
	I0407 14:03:04.404392    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubeadm --> /var/lib/minikube/binaries/v1.32.2/kubeadm (70942872 bytes)
	I0407 14:03:04.415472    9720 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet
	I0407 14:03:04.476704    9720 command_runner.go:130] ! stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0407 14:03:04.476834    9720 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.32.2/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.32.2/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.32.2/kubelet': No such file or directory
	I0407 14:03:04.476834    9720 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\linux\amd64\v1.32.2/kubelet --> /var/lib/minikube/binaries/v1.32.2/kubelet (77406468 bytes)
	I0407 14:03:05.742848    9720 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system
	I0407 14:03:05.758859    9720 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (319 bytes)
	I0407 14:03:05.788079    9720 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
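The binaries.go/ssh_runner.go exchange above follows a check-then-copy pattern: a failed stat on the remote path means the cached kubeadm/kubectl/kubelet must be transferred. A local-filesystem sketch of the same idea (copyIfMissing is a hypothetical helper; minikube performs the stat and copy over SSH):

package main

import (
	"fmt"
	"io"
	"os"
)

// copyIfMissing mirrors the logged pattern: if stat on the destination fails,
// the cached source is copied into place; otherwise the transfer is skipped.
func copyIfMissing(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		return nil // already present, skip the transfer
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.Create(dst)
	if err != nil {
		return err
	}
	defer out.Close()
	if _, err := io.Copy(out, in); err != nil {
		return err
	}
	return out.Sync()
}

func main() {
	// Illustrative local paths standing in for the Windows cache and VM target.
	fmt.Println(copyIfMissing("cache/v1.32.2/kubelet", "binaries/v1.32.2/kubelet"))
}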
	I0407 14:03:05.827223    9720 ssh_runner.go:195] Run: grep 172.17.92.89	control-plane.minikube.internal$ /etc/hosts
	I0407 14:03:05.833047    9720 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.92.89	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 14:03:05.871351    9720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:03:06.079302    9720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 14:03:06.106230    9720 host.go:66] Checking if "multinode-140200" exists ...
	I0407 14:03:06.107224    9720 start.go:317] joinCluster: &{Name:multinode-140200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-140200
Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.92.89 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.82.40 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:03:06.107224    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0407 14:03:06.107224    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:03:08.236361    9720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:03:08.236568    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:03:08.236653    9720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:03:10.809933    9720 main.go:141] libmachine: [stdout =====>] : 172.17.92.89
	
	I0407 14:03:10.809933    9720 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:03:10.810745    9720 sshutil.go:53] new ssh client: &{IP:172.17.92.89 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200\id_rsa Username:docker}
	I0407 14:03:11.176847    9720 command_runner.go:130] > kubeadm join control-plane.minikube.internal:8443 --token rywgfn.kr1hd6wofkn0yere --discovery-token-ca-cert-hash sha256:e47514a965ab97a4e144f4fd6f094d10c7d6accee7eb763a802bd4fd6fa9e479 
	I0407 14:03:11.176847    9720 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm token create --print-join-command --ttl=0": (5.0695863s)
	I0407 14:03:11.176847    9720 start.go:343] trying to join worker node "m02" to cluster: &{Name:m02 IP:172.17.82.40 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0407 14:03:11.176847    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rywgfn.kr1hd6wofkn0yere --discovery-token-ca-cert-hash sha256:e47514a965ab97a4e144f4fd6f094d10c7d6accee7eb763a802bd4fd6fa9e479 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-140200-m02"
	I0407 14:03:11.370138    9720 command_runner.go:130] ! 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0407 14:03:12.244493    9720 command_runner.go:130] > [preflight] Running pre-flight checks
	I0407 14:03:12.244698    9720 command_runner.go:130] > [preflight] Reading configuration from the "kubeadm-config" ConfigMap in namespace "kube-system"...
	I0407 14:03:12.244698    9720 command_runner.go:130] > [preflight] Use 'kubeadm init phase upload-config --config your-config.yaml' to re-upload it.
	I0407 14:03:12.244698    9720 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 14:03:12.244698    9720 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 14:03:12.244698    9720 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0407 14:03:12.244869    9720 command_runner.go:130] > [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0407 14:03:12.244869    9720 command_runner.go:130] > [kubelet-check] The kubelet is healthy after 501.392632ms
	I0407 14:03:12.244869    9720 command_runner.go:130] > [kubelet-start] Waiting for the kubelet to perform the TLS Bootstrap
	I0407 14:03:12.244869    9720 command_runner.go:130] > This node has joined the cluster:
	I0407 14:03:12.244869    9720 command_runner.go:130] > * Certificate signing request was sent to apiserver and a response was received.
	I0407 14:03:12.244869    9720 command_runner.go:130] > * The Kubelet was informed of the new secure connection details.
	I0407 14:03:12.244869    9720 command_runner.go:130] > Run 'kubectl get nodes' on the control-plane to see this node join the cluster.
	I0407 14:03:12.245027    9720 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm join control-plane.minikube.internal:8443 --token rywgfn.kr1hd6wofkn0yere --discovery-token-ca-cert-hash sha256:e47514a965ab97a4e144f4fd6f094d10c7d6accee7eb763a802bd4fd6fa9e479 --ignore-preflight-errors=all --cri-socket unix:///var/run/cri-dockerd.sock --node-name=multinode-140200-m02": (1.0681718s)
	I0407 14:03:12.245027    9720 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0407 14:03:12.450397    9720 command_runner.go:130] ! Created symlink /etc/systemd/system/multi-user.target.wants/kubelet.service → /usr/lib/systemd/system/kubelet.service.
	I0407 14:03:12.635938    9720 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes multinode-140200-m02 minikube.k8s.io/updated_at=2025_04_07T14_03_12_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277 minikube.k8s.io/name=multinode-140200 minikube.k8s.io/primary=false
	I0407 14:03:12.775362    9720 command_runner.go:130] > node/multinode-140200-m02 labeled
	I0407 14:03:12.775471    9720 start.go:319] duration metric: took 6.6681989s to joinCluster
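The join itself is assembled from the output of `kubeadm token create --print-join-command` on the control plane plus the worker-specific flags visible in the log (--ignore-preflight-errors, --cri-socket, --node-name). A trivial sketch of that assembly, with the token and CA hash replaced by placeholders:

package main

import (
	"fmt"
	"strings"
)

func main() {
	// Placeholder for the control plane's `kubeadm token create
	// --print-join-command` output; the real token/hash are not reused here.
	joinCmd := "kubeadm join control-plane.minikube.internal:8443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>"

	workerFlags := []string{
		"--ignore-preflight-errors=all",
		"--cri-socket", "unix:///var/run/cri-dockerd.sock",
		"--node-name=multinode-140200-m02",
	}
	fmt.Println("sudo env PATH=/var/lib/minikube/binaries/v1.32.2:$PATH " +
		joinCmd + " " + strings.Join(workerFlags, " "))
}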
	I0407 14:03:12.775607    9720 start.go:235] Will wait 6m0s for node &{Name:m02 IP:172.17.82.40 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}
	I0407 14:03:12.776242    9720 config.go:182] Loaded profile config "multinode-140200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 14:03:12.778550    9720 out.go:177] * Verifying Kubernetes components...
	I0407 14:03:12.794373    9720 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:03:12.990509    9720 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 14:03:13.016532    9720 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 14:03:13.017246    9720 kapi.go:59] client config for multinode-140200: &rest.Config{Host:"https://172.17.92.89:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-140200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-140200\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CADa
ta:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2be92e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0407 14:03:13.018444    9720 node_ready.go:35] waiting up to 6m0s for node "multinode-140200-m02" to be "Ready" ...
	I0407 14:03:13.018660    9720 type.go:168] "Request Body" body=""
	I0407 14:03:13.018742    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:13.018742    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:13.018742    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:13.018790    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:13.029097    9720 round_trippers.go:581] Response Status: 200 OK in 10 milliseconds
	I0407 14:03:13.029097    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:13.029097    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:13.029097    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:13.029097    9720 round_trippers.go:587]     Content-Length: 2718
	I0407 14:03:13.029097    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:13 GMT
	I0407 14:03:13.029097    9720 round_trippers.go:587]     Audit-Id: 177fa042-ea87-4235-9c6f-fe620800887f
	I0407 14:03:13.029097    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:13.029097    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:13.030265    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 87 15 0a 9c 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 31 30 38 00 42  |f2f300172.6108.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12404 chars]
	 >
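The repeated GETs of /api/v1/nodes/multinode-140200-m02 that follow are a readiness poll: node_ready.go re-fetches the Node roughly every 500ms and checks its Ready condition, for up to 6m0s. A client-go sketch of the same loop (kubeconfig path, node name literal, and timings are illustrative):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether the named node has condition Ready=True.
func nodeReady(cs *kubernetes.Clientset, name string) (bool, error) {
	n, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		return false, err
	}
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}

func main() {
	// Illustrative kubeconfig path; the run above uses the Jenkins profile's file.
	cfg, err := clientcmd.BuildConfigFromFlags("", "kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		if ok, _ := nodeReady(cs, "multinode-140200-m02"); ok {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to become Ready")
}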
	I0407 14:03:13.519357    9720 type.go:168] "Request Body" body=""
	I0407 14:03:13.519754    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:13.519754    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:13.519754    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:13.519754    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:13.523402    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:13.523402    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:13.523402    9720 round_trippers.go:587]     Audit-Id: 51437ebd-7aa7-4fe1-9e4a-0ee6b0b41072
	I0407 14:03:13.523511    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:13.523511    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:13.523511    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:13.523511    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:13.523511    9720 round_trippers.go:587]     Content-Length: 2718
	I0407 14:03:13.523511    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:13 GMT
	I0407 14:03:13.523736    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 87 15 0a 9c 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 31 30 38 00 42  |f2f300172.6108.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12404 chars]
	 >
	I0407 14:03:14.019424    9720 type.go:168] "Request Body" body=""
	I0407 14:03:14.019424    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:14.019424    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:14.019424    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:14.019424    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:14.045814    9720 round_trippers.go:581] Response Status: 200 OK in 26 milliseconds
	I0407 14:03:14.045814    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:14.045814    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:14.045814    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:14.045814    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:14.045938    9720 round_trippers.go:587]     Content-Length: 2718
	I0407 14:03:14.045938    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:14 GMT
	I0407 14:03:14.045938    9720 round_trippers.go:587]     Audit-Id: 6396f5b2-8ab6-4ae8-b366-2c3519a776db
	I0407 14:03:14.045938    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:14.046057    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 87 15 0a 9c 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 31 30 38 00 42  |f2f300172.6108.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12404 chars]
	 >
	I0407 14:03:14.518627    9720 type.go:168] "Request Body" body=""
	I0407 14:03:14.518627    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:14.518627    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:14.518627    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:14.518627    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:14.524781    9720 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 14:03:14.524781    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:14.524781    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:14.524781    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:14.524894    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:14.524894    9720 round_trippers.go:587]     Content-Length: 2718
	I0407 14:03:14.524894    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:14 GMT
	I0407 14:03:14.524894    9720 round_trippers.go:587]     Audit-Id: caf2f4ae-5ba8-4400-a9fe-141684324461
	I0407 14:03:14.524894    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:14.525110    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 87 15 0a 9c 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 31 30 38 00 42  |f2f300172.6108.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12404 chars]
	 >
	I0407 14:03:15.018601    9720 type.go:168] "Request Body" body=""
	I0407 14:03:15.018601    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:15.018601    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:15.018601    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:15.018601    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:15.022943    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:03:15.022943    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:15.022943    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:15.022943    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:15.022943    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:15.022943    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:15.022943    9720 round_trippers.go:587]     Content-Length: 2788
	I0407 14:03:15.022943    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:15 GMT
	I0407 14:03:15.022943    9720 round_trippers.go:587]     Audit-Id: 3778fec0-a3cb-4dc4-97bd-6614a2b9f25f
	I0407 14:03:15.023223    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 cd 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 31 36 38 00 42  |f2f300172.6168.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12789 chars]
	 >
	I0407 14:03:15.023223    9720 node_ready.go:53] node "multinode-140200-m02" has status "Ready":"False"
	I0407 14:03:15.519882    9720 type.go:168] "Request Body" body=""
	I0407 14:03:15.519882    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:15.519882    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:15.519882    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:15.519882    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:15.523559    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:15.523559    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:15.523623    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:15.523623    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:15.523623    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:15.523623    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:15.523623    9720 round_trippers.go:587]     Content-Length: 2788
	I0407 14:03:15.523623    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:15 GMT
	I0407 14:03:15.523623    9720 round_trippers.go:587]     Audit-Id: b5c6f1ae-1017-40eb-8b7e-9e32c01e0bd2
	I0407 14:03:15.523668    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 cd 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 31 36 38 00 42  |f2f300172.6168.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12789 chars]
	 >
	I0407 14:03:16.019764    9720 type.go:168] "Request Body" body=""
	I0407 14:03:16.019907    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:16.019907    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:16.019907    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:16.019907    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:16.023689    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:16.023787    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:16.023787    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:16.023787    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:16.023787    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:16.023787    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:16.023862    9720 round_trippers.go:587]     Content-Length: 2788
	I0407 14:03:16.023862    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:16 GMT
	I0407 14:03:16.023862    9720 round_trippers.go:587]     Audit-Id: 1c72270c-c0d8-43e0-b69f-f14b28b06510
	I0407 14:03:16.023975    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 cd 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 31 36 38 00 42  |f2f300172.6168.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12789 chars]
	 >
	I0407 14:03:16.518886    9720 type.go:168] "Request Body" body=""
	I0407 14:03:16.519498    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:16.519498    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:16.519498    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:16.519498    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:16.522619    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:16.522714    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:16.522714    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:16.522714    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:16.522714    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:16.522714    9720 round_trippers.go:587]     Content-Length: 2788
	I0407 14:03:16.522783    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:16 GMT
	I0407 14:03:16.522783    9720 round_trippers.go:587]     Audit-Id: ded31d71-5e2e-43f7-8a7e-27b2901ded7b
	I0407 14:03:16.522801    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:16.522962    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 cd 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 31 36 38 00 42  |f2f300172.6168.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12789 chars]
	 >
	I0407 14:03:17.019085    9720 type.go:168] "Request Body" body=""
	I0407 14:03:17.019085    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:17.019085    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:17.019085    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:17.019085    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:17.023595    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:03:17.023692    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:17.023692    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:17.023692    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:17.023692    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:17.023757    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:17.023757    9720 round_trippers.go:587]     Content-Length: 2788
	I0407 14:03:17.023757    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:17 GMT
	I0407 14:03:17.023757    9720 round_trippers.go:587]     Audit-Id: 0a8786df-d691-454a-877d-e0fc47a449de
	I0407 14:03:17.023757    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 cd 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 31 36 38 00 42  |f2f300172.6168.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12789 chars]
	 >
	I0407 14:03:17.023757    9720 node_ready.go:53] node "multinode-140200-m02" has status "Ready":"False"
	I0407 14:03:17.519376    9720 type.go:168] "Request Body" body=""
	I0407 14:03:17.519806    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:17.519806    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:17.519806    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:17.519806    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:17.524396    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:03:17.524396    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:17.524396    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:17 GMT
	I0407 14:03:17.524396    9720 round_trippers.go:587]     Audit-Id: 3b0b9f4f-2760-405c-ad8d-202278b43e67
	I0407 14:03:17.524396    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:17.524396    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:17.524396    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:17.524396    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:17.524396    9720 round_trippers.go:587]     Content-Length: 2788
	I0407 14:03:17.524396    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 cd 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 31 36 38 00 42  |f2f300172.6168.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12789 chars]
	 >
	I0407 14:03:18.019325    9720 type.go:168] "Request Body" body=""
	I0407 14:03:18.019325    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:18.019325    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:18.019325    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:18.019325    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:18.023745    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:03:18.023848    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:18.023848    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:18.023848    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:18.023848    9720 round_trippers.go:587]     Content-Length: 2788
	I0407 14:03:18.023848    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:18 GMT
	I0407 14:03:18.023848    9720 round_trippers.go:587]     Audit-Id: 7e3590b7-bd0f-4f30-9144-f28400a3a6b5
	I0407 14:03:18.023848    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:18.023848    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:18.023916    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 cd 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 31 36 38 00 42  |f2f300172.6168.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12789 chars]
	 >
	I0407 14:03:18.519926    9720 type.go:168] "Request Body" body=""
	I0407 14:03:18.520558    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:18.520558    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:18.520558    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:18.520558    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:18.524387    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:18.524446    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:18.524446    9720 round_trippers.go:587]     Audit-Id: ccdcdf33-f188-40c2-a9bc-35a2debff824
	I0407 14:03:18.524446    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:18.524446    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:18.524446    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:18.524446    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:18.524446    9720 round_trippers.go:587]     Content-Length: 2788
	I0407 14:03:18.524446    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:18 GMT
	I0407 14:03:18.524805    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 cd 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 31 36 38 00 42  |f2f300172.6168.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12789 chars]
	 >
	I0407 14:03:19.018729    9720 type.go:168] "Request Body" body=""
	I0407 14:03:19.018729    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:19.018729    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:19.018729    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:19.018729    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:19.023290    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:03:19.023290    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:19.023290    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:19 GMT
	I0407 14:03:19.023290    9720 round_trippers.go:587]     Audit-Id: e09f7e0b-392f-460e-9f48-bdf0e9387450
	I0407 14:03:19.023290    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:19.023290    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:19.023290    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:19.023290    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:19.023290    9720 round_trippers.go:587]     Content-Length: 2788
	I0407 14:03:19.023652    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 cd 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 31 36 38 00 42  |f2f300172.6168.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12789 chars]
	 >
	I0407 14:03:19.519392    9720 type.go:168] "Request Body" body=""
	I0407 14:03:19.519392    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:19.519392    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:19.519392    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:19.519392    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:19.523541    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:03:19.523541    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:19.523541    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:19.523541    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:19.523541    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:19.523541    9720 round_trippers.go:587]     Content-Length: 2788
	I0407 14:03:19.523541    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:19 GMT
	I0407 14:03:19.523541    9720 round_trippers.go:587]     Audit-Id: 7266b800-d39a-4691-a181-d7a7cf19d682
	I0407 14:03:19.523541    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:19.523541    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 cd 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 31 36 38 00 42  |f2f300172.6168.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12789 chars]
	 >
	I0407 14:03:19.524113    9720 node_ready.go:53] node "multinode-140200-m02" has status "Ready":"False"
	I0407 14:03:20.019378    9720 type.go:168] "Request Body" body=""
	I0407 14:03:20.019378    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:20.019378    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:20.019378    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:20.019378    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:20.023712    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:03:20.023712    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:20.023712    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:20.023712    9720 round_trippers.go:587]     Content-Length: 2788
	I0407 14:03:20.023712    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:20 GMT
	I0407 14:03:20.023712    9720 round_trippers.go:587]     Audit-Id: dc4e64ca-96d1-4c8e-8f24-320f1b29845f
	I0407 14:03:20.023712    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:20.023712    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:20.023712    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:20.023712    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 cd 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 31 36 38 00 42  |f2f300172.6168.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12789 chars]
	 >
	I0407 14:03:20.519629    9720 type.go:168] "Request Body" body=""
	I0407 14:03:20.519629    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:20.519629    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:20.519629    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:20.519629    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:20.523427    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:20.523490    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:20.523490    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:20.523548    9720 round_trippers.go:587]     Content-Length: 2788
	I0407 14:03:20.523548    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:20 GMT
	I0407 14:03:20.523548    9720 round_trippers.go:587]     Audit-Id: f8ca1d6a-c13c-4b3b-a1f7-350b2359cf4a
	I0407 14:03:20.523548    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:20.523548    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:20.523548    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:20.523774    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 cd 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 31 36 38 00 42  |f2f300172.6168.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12789 chars]
	 >
	I0407 14:03:21.018981    9720 type.go:168] "Request Body" body=""
	I0407 14:03:21.018981    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:21.018981    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:21.018981    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:21.018981    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:21.022995    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:03:21.022995    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:21.022995    9720 round_trippers.go:587]     Content-Length: 2788
	I0407 14:03:21.022995    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:21 GMT
	I0407 14:03:21.022995    9720 round_trippers.go:587]     Audit-Id: b95eddfd-4615-4845-932f-11e7cba2ec74
	I0407 14:03:21.022995    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:21.022995    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:21.022995    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:21.022995    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:21.022995    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 cd 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 31 36 38 00 42  |f2f300172.6168.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12789 chars]
	 >
	I0407 14:03:21.519898    9720 type.go:168] "Request Body" body=""
	I0407 14:03:21.519898    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:21.519898    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:21.519898    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:21.519898    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:21.523751    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:21.523751    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:21.523813    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:21 GMT
	I0407 14:03:21.523813    9720 round_trippers.go:587]     Audit-Id: 7913012b-6841-404f-ac35-e386303df71f
	I0407 14:03:21.523813    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:21.523813    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:21.523853    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:21.523853    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:21.523853    9720 round_trippers.go:587]     Content-Length: 2788
	I0407 14:03:21.524148    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 cd 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 31 36 38 00 42  |f2f300172.6168.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12789 chars]
	 >
	I0407 14:03:21.524277    9720 node_ready.go:53] node "multinode-140200-m02" has status "Ready":"False"
	I0407 14:03:22.019304    9720 type.go:168] "Request Body" body=""
	I0407 14:03:22.019304    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:22.019304    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:22.019304    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:22.019304    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:22.042477    9720 round_trippers.go:581] Response Status: 200 OK in 23 milliseconds
	I0407 14:03:22.042477    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:22.042477    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:22.042477    9720 round_trippers.go:587]     Content-Length: 2788
	I0407 14:03:22.042567    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:22 GMT
	I0407 14:03:22.042567    9720 round_trippers.go:587]     Audit-Id: 33e12052-dbed-464e-a634-7cc8a3ddb666
	I0407 14:03:22.042567    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:22.042567    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:22.042567    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:22.042854    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 cd 15 0a aa 0c 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 31 36 38 00 42  |f2f300172.6168.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 12789 chars]
	 >
	I0407 14:03:22.520007    9720 type.go:168] "Request Body" body=""
	I0407 14:03:22.520101    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:22.520101    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:22.520101    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:22.520196    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:22.523295    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:22.523388    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:22.523388    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:22.523388    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:22.523424    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:22.523424    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:22 GMT
	I0407 14:03:22.523424    9720 round_trippers.go:587]     Audit-Id: 030c7f4f-fd04-49fe-ae9a-d7956ea05b4f
	I0407 14:03:22.523424    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:22.523424    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:22.523689    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:23.019255    9720 type.go:168] "Request Body" body=""
	I0407 14:03:23.019286    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:23.019286    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:23.019407    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:23.019407    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:23.022752    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:23.022752    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:23.022752    9720 round_trippers.go:587]     Audit-Id: 1d7836f1-3230-4021-a435-ad0c777cb485
	I0407 14:03:23.022752    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:23.022752    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:23.022752    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:23.022752    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:23.022752    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:23.022752    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:23 GMT
	I0407 14:03:23.022752    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:23.519262    9720 type.go:168] "Request Body" body=""
	I0407 14:03:23.519262    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:23.519262    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:23.519262    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:23.519262    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:23.523042    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:23.523042    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:23.523042    9720 round_trippers.go:587]     Audit-Id: 45df15cb-f7d6-43dd-b2ab-f6445e581e17
	I0407 14:03:23.523042    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:23.523042    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:23.523042    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:23.523042    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:23.523042    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:23.523042    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:23 GMT
	I0407 14:03:23.523598    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:24.026528    9720 type.go:168] "Request Body" body=""
	I0407 14:03:24.026528    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:24.026528    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:24.026528    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:24.026528    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:24.030529    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:03:24.030529    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:24.030529    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:24 GMT
	I0407 14:03:24.030529    9720 round_trippers.go:587]     Audit-Id: 01f98a13-19a2-43c7-b904-de84e6143aa0
	I0407 14:03:24.030529    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:24.030529    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:24.030529    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:24.030529    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:24.030529    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:24.030529    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:24.030529    9720 node_ready.go:53] node "multinode-140200-m02" has status "Ready":"False"
	I0407 14:03:24.519791    9720 type.go:168] "Request Body" body=""
	I0407 14:03:24.519791    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:24.519791    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:24.519791    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:24.519791    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:24.524317    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:03:24.524317    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:24.524317    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:24.524317    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:24 GMT
	I0407 14:03:24.524317    9720 round_trippers.go:587]     Audit-Id: 76e75c49-4a6d-4b5c-ae88-e47c1ab39e96
	I0407 14:03:24.524317    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:24.524317    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:24.524317    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:24.524317    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:24.524317    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:25.019515    9720 type.go:168] "Request Body" body=""
	I0407 14:03:25.019515    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:25.019515    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:25.019515    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:25.019515    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:25.023886    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:03:25.023960    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:25.023960    9720 round_trippers.go:587]     Audit-Id: 027bd2f6-dafe-43d6-b4be-af7cd1c38e8a
	I0407 14:03:25.023960    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:25.023960    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:25.023960    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:25.023960    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:25.023960    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:25.023960    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:25 GMT
	I0407 14:03:25.024263    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:25.519497    9720 type.go:168] "Request Body" body=""
	I0407 14:03:25.519497    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:25.519497    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:25.519497    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:25.519497    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:25.523839    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:03:25.523839    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:25.523902    9720 round_trippers.go:587]     Audit-Id: 2d7e25e2-5149-492a-a0cb-4c0109953dfc
	I0407 14:03:25.523902    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:25.523902    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:25.523902    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:25.523902    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:25.523902    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:25.523902    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:25 GMT
	I0407 14:03:25.523902    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:26.019531    9720 type.go:168] "Request Body" body=""
	I0407 14:03:26.019531    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:26.019531    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:26.019531    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:26.019531    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:26.023751    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:03:26.023901    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:26.023901    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:26 GMT
	I0407 14:03:26.023901    9720 round_trippers.go:587]     Audit-Id: f3b52ea2-d5cb-423a-8cac-c0171ba3823a
	I0407 14:03:26.023901    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:26.023948    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:26.023948    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:26.023948    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:26.023948    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:26.024125    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:26.520113    9720 type.go:168] "Request Body" body=""
	I0407 14:03:26.520113    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:26.520113    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:26.520113    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:26.520113    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:26.523957    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:26.523957    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:26.523957    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:26.523957    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:26.523957    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:26.523957    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:26 GMT
	I0407 14:03:26.523957    9720 round_trippers.go:587]     Audit-Id: 0045e9df-63ce-4deb-8345-87264acfc1a8
	I0407 14:03:26.523957    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:26.523957    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:26.523957    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:26.524600    9720 node_ready.go:53] node "multinode-140200-m02" has status "Ready":"False"
	I0407 14:03:27.019849    9720 type.go:168] "Request Body" body=""
	I0407 14:03:27.019849    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:27.019849    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:27.019849    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:27.019849    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:27.088368    9720 round_trippers.go:581] Response Status: 200 OK in 68 milliseconds
	I0407 14:03:27.088540    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:27.088540    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:27 GMT
	I0407 14:03:27.088540    9720 round_trippers.go:587]     Audit-Id: 8bd6aa93-67cf-4d27-8e55-d5577bad8b4d
	I0407 14:03:27.088622    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:27.088622    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:27.088622    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:27.088622    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:27.088622    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:27.088980    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:27.519098    9720 type.go:168] "Request Body" body=""
	I0407 14:03:27.519098    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:27.519098    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:27.519098    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:27.519098    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:27.522505    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:27.522505    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:27.522505    9720 round_trippers.go:587]     Audit-Id: 80cae4e2-1a48-416b-9a4e-4644bdbc631d
	I0407 14:03:27.522505    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:27.522505    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:27.522505    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:27.522505    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:27.522505    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:27.522505    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:27 GMT
	I0407 14:03:27.522505    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:28.018830    9720 type.go:168] "Request Body" body=""
	I0407 14:03:28.018830    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:28.018830    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:28.018830    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:28.018830    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:28.050324    9720 round_trippers.go:581] Response Status: 200 OK in 31 milliseconds
	I0407 14:03:28.050441    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:28.050441    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:28.050441    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:28.050441    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:28.050441    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:28.050441    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:28 GMT
	I0407 14:03:28.050441    9720 round_trippers.go:587]     Audit-Id: cfe88d7f-7979-494d-bf8c-f857d8bb1d9c
	I0407 14:03:28.050566    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:28.050607    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:28.519351    9720 type.go:168] "Request Body" body=""
	I0407 14:03:28.519351    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:28.519351    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:28.519351    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:28.519351    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:28.521917    9720 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:03:28.521917    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:28.521917    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:28.521917    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:28 GMT
	I0407 14:03:28.521917    9720 round_trippers.go:587]     Audit-Id: f2aff221-8780-4e4a-afa0-d9f28b79fe97
	I0407 14:03:28.521917    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:28.521917    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:28.521917    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:28.521917    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:28.522636    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:29.019175    9720 type.go:168] "Request Body" body=""
	I0407 14:03:29.019175    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:29.019175    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:29.019175    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:29.019175    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:29.022921    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:29.022921    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:29.022921    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:29 GMT
	I0407 14:03:29.022921    9720 round_trippers.go:587]     Audit-Id: 7f608783-5c65-4939-ab06-2141aefb10c1
	I0407 14:03:29.022921    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:29.022921    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:29.022921    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:29.022921    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:29.022921    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:29.022921    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:29.022921    9720 node_ready.go:53] node "multinode-140200-m02" has status "Ready":"False"
	I0407 14:03:29.519559    9720 type.go:168] "Request Body" body=""
	I0407 14:03:29.520360    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:29.520360    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:29.520360    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:29.520360    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:29.524464    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:03:29.524464    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:29.524464    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:29 GMT
	I0407 14:03:29.524464    9720 round_trippers.go:587]     Audit-Id: 3c8ba033-7a22-4a5f-9e63-f012b1e6bc31
	I0407 14:03:29.524464    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:29.524464    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:29.524464    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:29.524464    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:29.524464    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:29.524464    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:30.020009    9720 type.go:168] "Request Body" body=""
	I0407 14:03:30.020185    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:30.020185    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:30.020185    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:30.020260    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:30.024177    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:30.024177    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:30.024177    9720 round_trippers.go:587]     Audit-Id: 3608474d-6783-43f6-ad0b-705adf0ad540
	I0407 14:03:30.024177    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:30.024276    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:30.024276    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:30.024276    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:30.024276    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:30.024276    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:30 GMT
	I0407 14:03:30.024573    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:30.519399    9720 type.go:168] "Request Body" body=""
	I0407 14:03:30.519493    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:30.519581    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:30.519581    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:30.519581    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:30.522682    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:30.522682    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:30.522760    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:30 GMT
	I0407 14:03:30.522760    9720 round_trippers.go:587]     Audit-Id: 6cd4407e-c25d-4bd8-a0ce-586bf752f742
	I0407 14:03:30.522760    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:30.522760    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:30.522760    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:30.522760    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:30.522760    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:30.522912    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:31.019529    9720 type.go:168] "Request Body" body=""
	I0407 14:03:31.020021    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:31.020081    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:31.020081    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:31.020081    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:31.025418    9720 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:03:31.025418    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:31.025418    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:31.025418    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:31.025418    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:31.025418    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:31 GMT
	I0407 14:03:31.025418    9720 round_trippers.go:587]     Audit-Id: 81ce39bb-84a0-49e0-8a57-c365cfc67795
	I0407 14:03:31.025418    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:31.025418    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:31.025671    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:31.025857    9720 node_ready.go:53] node "multinode-140200-m02" has status "Ready":"False"
	I0407 14:03:31.518957    9720 type.go:168] "Request Body" body=""
	I0407 14:03:31.519258    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:31.519258    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:31.519258    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:31.519372    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:31.522454    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:31.522454    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:31.522573    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:31.522573    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:31.522573    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:31 GMT
	I0407 14:03:31.522573    9720 round_trippers.go:587]     Audit-Id: 24a98072-6dda-4f14-bc5c-b55acb204c53
	I0407 14:03:31.522573    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:31.522573    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:31.522573    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:31.522646    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:32.019853    9720 type.go:168] "Request Body" body=""
	I0407 14:03:32.020291    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:32.020291    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:32.020291    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:32.020291    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:32.023800    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:32.023800    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:32.023800    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:32 GMT
	I0407 14:03:32.023880    9720 round_trippers.go:587]     Audit-Id: b9c20ca6-eefd-4ff6-b272-d82dc42d1401
	I0407 14:03:32.023880    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:32.023880    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:32.023880    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:32.023880    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:32.023880    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:32.024094    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:32.519759    9720 type.go:168] "Request Body" body=""
	I0407 14:03:32.519759    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:32.519759    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:32.519759    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:32.519759    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:32.523009    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:32.523009    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:32.523090    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:32 GMT
	I0407 14:03:32.523090    9720 round_trippers.go:587]     Audit-Id: 9405e843-2bb1-42b1-9375-ec7176a1ea07
	I0407 14:03:32.523090    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:32.523090    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:32.523090    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:32.523090    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:32.523090    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:32.523442    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:33.018917    9720 type.go:168] "Request Body" body=""
	I0407 14:03:33.018917    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:33.018917    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:33.018917    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:33.018917    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:33.023479    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:03:33.023479    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:33.023479    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:33.023479    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:33 GMT
	I0407 14:03:33.023479    9720 round_trippers.go:587]     Audit-Id: 3c58aab5-4e95-4614-a24f-88f3a4e1eb1c
	I0407 14:03:33.023479    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:33.023479    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:33.023614    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:33.023614    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:33.023928    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:33.519208    9720 type.go:168] "Request Body" body=""
	I0407 14:03:33.519208    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:33.519208    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:33.519208    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:33.519208    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:33.526918    9720 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 14:03:33.526918    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:33.526918    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:33 GMT
	I0407 14:03:33.527106    9720 round_trippers.go:587]     Audit-Id: a982e6a5-dc0a-4bb0-9555-1a25ae1a6ab4
	I0407 14:03:33.527106    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:33.527106    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:33.527106    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:33.527106    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:33.527106    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:33.527282    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:33.527282    9720 node_ready.go:53] node "multinode-140200-m02" has status "Ready":"False"
	I0407 14:03:34.019215    9720 type.go:168] "Request Body" body=""
	I0407 14:03:34.019215    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:34.019215    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:34.019215    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:34.019215    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:34.023581    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:03:34.023610    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:34.023651    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:34.023651    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:34 GMT
	I0407 14:03:34.023651    9720 round_trippers.go:587]     Audit-Id: 1d3822d8-4a9c-4e62-9c6f-207c29ec5ac7
	I0407 14:03:34.023680    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:34.023680    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:34.023680    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:34.023680    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:34.023977    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:34.518871    9720 type.go:168] "Request Body" body=""
	I0407 14:03:34.518871    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:34.518871    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:34.518871    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:34.518871    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:34.522344    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:34.522344    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:34.522344    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:34.522472    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:34.522472    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:34.522472    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:34.522472    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:34.522472    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:34 GMT
	I0407 14:03:34.522472    9720 round_trippers.go:587]     Audit-Id: 9a882de7-9e8b-4032-b3e7-c0d8a7ba99fb
	I0407 14:03:34.522681    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:35.020114    9720 type.go:168] "Request Body" body=""
	I0407 14:03:35.020339    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:35.020339    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:35.020393    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:35.020393    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:35.025281    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:03:35.025281    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:35.025281    9720 round_trippers.go:587]     Audit-Id: d2a36f29-04d4-4530-85cc-e09c67c7ca59
	I0407 14:03:35.025281    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:35.025281    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:35.025281    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:35.025281    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:35.025281    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:35.025281    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:35 GMT
	I0407 14:03:35.026298    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:35.519320    9720 type.go:168] "Request Body" body=""
	I0407 14:03:35.519320    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:35.519320    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:35.519320    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:35.519320    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:35.525710    9720 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 14:03:35.525710    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:35.525763    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:35.525763    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:35 GMT
	I0407 14:03:35.525763    9720 round_trippers.go:587]     Audit-Id: 0596b855-06d1-4938-99bf-1d7e4ddf3e47
	I0407 14:03:35.525763    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:35.525763    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:35.525763    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:35.525763    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:35.525763    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:36.019359    9720 type.go:168] "Request Body" body=""
	I0407 14:03:36.019359    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:36.019359    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:36.019359    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:36.019359    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:36.023911    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:03:36.024006    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:36.024006    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:36.024006    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:36 GMT
	I0407 14:03:36.024006    9720 round_trippers.go:587]     Audit-Id: d51b04da-071a-4e26-b87b-dbe236431f75
	I0407 14:03:36.024006    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:36.024006    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:36.024006    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:36.024006    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:36.024303    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:36.024465    9720 node_ready.go:53] node "multinode-140200-m02" has status "Ready":"False"
	I0407 14:03:36.520321    9720 type.go:168] "Request Body" body=""
	I0407 14:03:36.520321    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:36.520590    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:36.520648    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:36.520648    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:36.524258    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:36.524355    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:36.524355    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:36.524355    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:36.524355    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:36.524355    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:36.524355    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:36.524355    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:36 GMT
	I0407 14:03:36.524355    9720 round_trippers.go:587]     Audit-Id: b3674fe1-fb0e-411e-b95d-9cae0a3990d0
	I0407 14:03:36.524614    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:37.019774    9720 type.go:168] "Request Body" body=""
	I0407 14:03:37.019846    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:37.019846    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:37.019943    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:37.019943    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:37.023763    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:37.023856    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:37.023856    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:37.023856    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:37.023856    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:37.023856    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:37 GMT
	I0407 14:03:37.023856    9720 round_trippers.go:587]     Audit-Id: ff0b8aa8-c938-4050-96d3-8d54ed1b2d1f
	I0407 14:03:37.023942    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:37.023942    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:37.024094    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:37.519589    9720 type.go:168] "Request Body" body=""
	I0407 14:03:37.520101    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:37.520101    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:37.520101    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:37.520101    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:37.526411    9720 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 14:03:37.526518    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:37.526518    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:37 GMT
	I0407 14:03:37.526518    9720 round_trippers.go:587]     Audit-Id: 61dc3fb9-5def-472d-bb8f-4c3972f784f7
	I0407 14:03:37.526518    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:37.526628    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:37.526628    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:37.526628    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:37.526651    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:37.526678    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:38.019401    9720 type.go:168] "Request Body" body=""
	I0407 14:03:38.019963    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:38.019963    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:38.019963    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:38.019963    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:38.023859    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:38.023859    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:38.023859    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:38.023859    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:38.023859    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:38 GMT
	I0407 14:03:38.023859    9720 round_trippers.go:587]     Audit-Id: ea71dd1f-9fea-402b-b96e-798dea3dea45
	I0407 14:03:38.024004    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:38.024004    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:38.024004    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:38.024004    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:38.519495    9720 type.go:168] "Request Body" body=""
	I0407 14:03:38.520120    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:38.520120    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:38.520120    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:38.520120    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:38.526204    9720 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 14:03:38.526204    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:38.526204    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:38.526340    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:38.526340    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:38.526340    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:38.526340    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:38 GMT
	I0407 14:03:38.526505    9720 round_trippers.go:587]     Audit-Id: 27122bee-55b9-4769-a4be-f64bc95de7c8
	I0407 14:03:38.526546    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:38.526848    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:38.526848    9720 node_ready.go:53] node "multinode-140200-m02" has status "Ready":"False"
	I0407 14:03:39.018904    9720 type.go:168] "Request Body" body=""
	I0407 14:03:39.018904    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:39.018904    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:39.018904    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:39.018904    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:39.023323    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:03:39.023323    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:39.023395    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:39.023395    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:39.023395    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:39 GMT
	I0407 14:03:39.023395    9720 round_trippers.go:587]     Audit-Id: a1119301-a8d0-4941-b6ea-fb342966478c
	I0407 14:03:39.023395    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:39.023395    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:39.023395    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:39.023480    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:39.518978    9720 type.go:168] "Request Body" body=""
	I0407 14:03:39.518978    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:39.518978    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:39.518978    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:39.518978    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:39.522877    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:39.522955    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:39.522955    9720 round_trippers.go:587]     Audit-Id: 40dfa382-cf87-42ce-8f2a-143cae1c6ad9
	I0407 14:03:39.522955    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:39.522955    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:39.522955    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:39.522955    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:39.522955    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:39.522955    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:39 GMT
	I0407 14:03:39.523212    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:40.018921    9720 type.go:168] "Request Body" body=""
	I0407 14:03:40.019403    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:40.019403    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:40.019403    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:40.019403    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:40.023139    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:40.023139    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:40.023139    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:40.023139    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:40.023139    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:40.023326    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:40 GMT
	I0407 14:03:40.023326    9720 round_trippers.go:587]     Audit-Id: 2119c3fa-437f-44b8-aea9-5e09b5d5ed79
	I0407 14:03:40.023326    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:40.023326    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:40.023693    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:40.518794    9720 type.go:168] "Request Body" body=""
	I0407 14:03:40.518794    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:40.518794    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:40.518794    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:40.518794    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:40.524684    9720 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:03:40.524684    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:40.524684    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:40 GMT
	I0407 14:03:40.524684    9720 round_trippers.go:587]     Audit-Id: fecd990a-ae3c-43ca-8710-65215de32395
	I0407 14:03:40.524684    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:40.524684    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:40.524684    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:40.524684    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:40.524684    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:40.525105    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:41.019396    9720 type.go:168] "Request Body" body=""
	I0407 14:03:41.020933    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:41.020993    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:41.020993    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:41.020993    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:41.025778    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:03:41.026309    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:41.026309    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:41 GMT
	I0407 14:03:41.026309    9720 round_trippers.go:587]     Audit-Id: a0373169-eeae-4086-801f-2bb66b3f825c
	I0407 14:03:41.026309    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:41.026309    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:41.026309    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:41.026309    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:41.026309    9720 round_trippers.go:587]     Content-Length: 3089
	I0407 14:03:41.026542    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 fa 17 0a f9 0e 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 32 35 38 00 42  |f2f300172.6258.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 14287 chars]
	 >
	I0407 14:03:41.026667    9720 node_ready.go:53] node "multinode-140200-m02" has status "Ready":"False"
	I0407 14:03:41.520148    9720 type.go:168] "Request Body" body=""
	I0407 14:03:41.520262    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:41.520262    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:41.520262    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:41.520262    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:41.523635    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:41.523710    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:41.523710    9720 round_trippers.go:587]     Audit-Id: cd37358b-35b0-4926-a5e9-79152fc7b30e
	I0407 14:03:41.523710    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:41.523784    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:41.523784    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:41.523784    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:41.523784    9720 round_trippers.go:587]     Content-Length: 2967
	I0407 14:03:41.523805    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:41 GMT
	I0407 14:03:41.523832    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 80 17 0a af 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 35 36 38 00 42  |f2f300172.6568.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 13661 chars]
	 >
	I0407 14:03:41.523832    9720 node_ready.go:49] node "multinode-140200-m02" has status "Ready":"True"
	I0407 14:03:41.523832    9720 node_ready.go:38] duration metric: took 28.5051423s for node "multinode-140200-m02" to be "Ready" ...
	I0407 14:03:41.523832    9720 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 14:03:41.523832    9720 type.go:204] "Request Body" body=""
	I0407 14:03:41.524398    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods
	I0407 14:03:41.524398    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:41.524398    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:41.524398    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:41.529190    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:03:41.529217    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:41.529217    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:41.529217    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:41.529217    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:41 GMT
	I0407 14:03:41.529217    9720 round_trippers.go:587]     Audit-Id: 63339e8a-8223-4eaf-9b47-dfda6bf4c3f8
	I0407 14:03:41.529217    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:41.529269    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:41.532482    9720 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 89 93 03 0a  09 0a 00 12 03 36 35 36  |ist..........656|
		00000020  1a 00 12 ce 27 0a ae 19  0a 18 63 6f 72 65 64 6e  |....'.....coredn|
		00000030  73 2d 36 36 38 64 36 62  66 39 62 63 2d 35 66 70  |s-668d6bf9bc-5fp|
		00000040  34 66 12 13 63 6f 72 65  64 6e 73 2d 36 36 38 64  |4f..coredns-668d|
		00000050  36 62 66 39 62 63 2d 1a  0b 6b 75 62 65 2d 73 79  |6bf9bc-..kube-sy|
		00000060  73 74 65 6d 22 00 2a 24  34 33 37 32 32 36 61 65  |stem".*$437226ae|
		00000070  2d 65 36 33 64 2d 34 32  34 35 2d 62 62 65 61 2d  |-e63d-4245-bbea-|
		00000080  61 64 35 63 34 31 66 66  39 61 39 33 32 03 34 34  |ad5c41ff9a932.44|
		00000090  38 38 00 42 08 08 e6 b4  cf bf 06 10 00 5a 13 0a  |88.B.........Z..|
		000000a0  07 6b 38 73 2d 61 70 70  12 08 6b 75 62 65 2d 64  |.k8s-app..kube-d|
		000000b0  6e 73 5a 1f 0a 11 70 6f  64 2d 74 65 6d 70 6c 61  |nsZ...pod-templa|
		000000c0  74 65 2d 68 61 73 68 12  0a 36 36 38 64 36 62 66  |te-hash..668d6b [truncated 253897 chars]
	 >
	I0407 14:03:41.533098    9720 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-5fp4f" in "kube-system" namespace to be "Ready" ...
	I0407 14:03:41.533335    9720 type.go:168] "Request Body" body=""
	I0407 14:03:41.533335    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-5fp4f
	I0407 14:03:41.533335    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:41.533335    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:41.533335    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:41.536394    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:41.536394    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:41.536394    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:41 GMT
	I0407 14:03:41.536394    9720 round_trippers.go:587]     Audit-Id: 8c3345ac-aec4-42a0-9dba-33c8db423d79
	I0407 14:03:41.536394    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:41.536394    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:41.536394    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:41.536394    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:41.537436    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  ce 27 0a ae 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.'.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 35 66 70 34 66 12  |68d6bf9bc-5fp4f.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 34 33 37  32 32 36 61 65 2d 65 36  |m".*$437226ae-e6|
		00000060  33 64 2d 34 32 34 35 2d  62 62 65 61 2d 61 64 35  |3d-4245-bbea-ad5|
		00000070  63 34 31 66 66 39 61 39  33 32 03 34 34 38 38 00  |c41ff9a932.4488.|
		00000080  42 08 08 e6 b4 cf bf 06  10 00 5a 13 0a 07 6b 38  |B.........Z...k8|
		00000090  73 2d 61 70 70 12 08 6b  75 62 65 2d 64 6e 73 5a  |s-app..kube-dnsZ|
		000000a0  1f 0a 11 70 6f 64 2d 74  65 6d 70 6c 61 74 65 2d  |...pod-template-|
		000000b0  68 61 73 68 12 0a 36 36  38 64 36 62 66 39 62 63  |hash..668d6bf9bc|
		000000c0  6a 53 0a 0a 52 65 70 6c  69 63 61 53 65 74 1a 12  |jS..ReplicaSet. [truncated 24165 chars]
	 >
	I0407 14:03:41.537436    9720 type.go:168] "Request Body" body=""
	I0407 14:03:41.537436    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:03:41.537436    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:41.537436    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:41.537436    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:41.540816    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:41.540816    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:41.540816    9720 round_trippers.go:587]     Audit-Id: 50a8b5bc-fe68-468f-a844-8a04219c5445
	I0407 14:03:41.540816    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:41.540816    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:41.540816    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:41.540816    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:41.540816    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:41 GMT
	I0407 14:03:41.541824    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 34 35  37 38 00 42 08 08 dd b4  |d4df2.4578.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21014 chars]
	 >
	I0407 14:03:41.541824    9720 pod_ready.go:93] pod "coredns-668d6bf9bc-5fp4f" in "kube-system" namespace has status "Ready":"True"
	I0407 14:03:41.541824    9720 pod_ready.go:82] duration metric: took 8.549ms for pod "coredns-668d6bf9bc-5fp4f" in "kube-system" namespace to be "Ready" ...
	I0407 14:03:41.541824    9720 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:03:41.541824    9720 type.go:168] "Request Body" body=""
	I0407 14:03:41.541824    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-140200
	I0407 14:03:41.541824    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:41.541824    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:41.541824    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:41.544078    9720 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:03:41.544078    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:41.544078    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:41.544078    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:41.544078    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:41.544078    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:41.544078    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:41 GMT
	I0407 14:03:41.544078    9720 round_trippers.go:587]     Audit-Id: 4abb9fae-dd74-47e9-9daf-b322cefffbb2
	I0407 14:03:41.545091    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  95 2b 0a 9a 1a 0a 15 65  74 63 64 2d 6d 75 6c 74  |.+.....etcd-mult|
		00000020  69 6e 6f 64 65 2d 31 34  30 32 30 30 12 00 1a 0b  |inode-140200....|
		00000030  6b 75 62 65 2d 73 79 73  74 65 6d 22 00 2a 24 62  |kube-system".*$b|
		00000040  33 34 35 63 35 66 32 2d  65 36 63 30 2d 34 65 66  |345c5f2-e6c0-4ef|
		00000050  62 2d 39 31 30 65 2d 62  32 38 39 36 36 66 65 30  |b-910e-b28966fe0|
		00000060  33 32 64 32 03 34 30 37  38 00 42 08 08 e0 b4 cf  |32d2.4078.B.....|
		00000070  bf 06 10 00 5a 11 0a 09  63 6f 6d 70 6f 6e 65 6e  |....Z...componen|
		00000080  74 12 04 65 74 63 64 5a  15 0a 04 74 69 65 72 12  |t..etcdZ...tier.|
		00000090  0d 63 6f 6e 74 72 6f 6c  2d 70 6c 61 6e 65 62 4d  |.control-planebM|
		000000a0  0a 30 6b 75 62 65 61 64  6d 2e 6b 75 62 65 72 6e  |.0kubeadm.kubern|
		000000b0  65 74 65 73 2e 69 6f 2f  65 74 63 64 2e 61 64 76  |etes.io/etcd.adv|
		000000c0  65 72 74 69 73 65 2d 63  6c 69 65 6e 74 2d 75 72  |ertise-client-u [truncated 26384 chars]
	 >
	I0407 14:03:41.545091    9720 type.go:168] "Request Body" body=""
	I0407 14:03:41.545091    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:03:41.545091    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:41.545091    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:41.545091    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:41.548146    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:41.548146    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:41.548215    9720 round_trippers.go:587]     Audit-Id: 0fa20f22-404d-4f83-adb4-0c2c46c204ea
	I0407 14:03:41.548215    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:41.548215    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:41.548215    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:41.548215    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:41.548215    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:41 GMT
	I0407 14:03:41.548679    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 34 35  37 38 00 42 08 08 dd b4  |d4df2.4578.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21014 chars]
	 >
	I0407 14:03:41.548679    9720 pod_ready.go:93] pod "etcd-multinode-140200" in "kube-system" namespace has status "Ready":"True"
	I0407 14:03:41.548679    9720 pod_ready.go:82] duration metric: took 6.8551ms for pod "etcd-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:03:41.548679    9720 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:03:41.548679    9720 type.go:168] "Request Body" body=""
	I0407 14:03:41.548679    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-140200
	I0407 14:03:41.548679    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:41.548679    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:41.548679    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:41.551903    9720 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:03:41.551903    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:41.551903    9720 round_trippers.go:587]     Audit-Id: 37abe7c8-c810-46b1-960c-3b8cd0e522bc
	I0407 14:03:41.551903    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:41.551903    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:41.551903    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:41.551903    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:41.551903    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:41 GMT
	I0407 14:03:41.552335    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  fb 33 0a aa 1c 0a 1f 6b  75 62 65 2d 61 70 69 73  |.3.....kube-apis|
		00000020  65 72 76 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |erver-multinode-|
		00000030  31 34 30 32 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |140200....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 30 66 37 38 32 39 30  |ystem".*$0f78290|
		00000050  36 2d 39 38 63 32 2d 34  64 65 64 2d 39 61 39 38  |6-98c2-4ded-9a98|
		00000060  2d 62 63 37 62 31 34 33  35 30 62 30 36 32 03 33  |-bc7b14350b062.3|
		00000070  35 33 38 00 42 08 08 e0  b4 cf bf 06 10 00 5a 1b  |538.B.........Z.|
		00000080  0a 09 63 6f 6d 70 6f 6e  65 6e 74 12 0e 6b 75 62  |..component..kub|
		00000090  65 2d 61 70 69 73 65 72  76 65 72 5a 15 0a 04 74  |e-apiserverZ...t|
		000000a0  69 65 72 12 0d 63 6f 6e  74 72 6f 6c 2d 70 6c 61  |ier..control-pla|
		000000b0  6e 65 62 54 0a 3f 6b 75  62 65 61 64 6d 2e 6b 75  |nebT.?kubeadm.ku|
		000000c0  62 65 72 6e 65 74 65 73  2e 69 6f 2f 6b 75 62 65  |bernetes.io/kub [truncated 31983 chars]
	 >
	I0407 14:03:41.552425    9720 type.go:168] "Request Body" body=""
	I0407 14:03:41.552425    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:03:41.552425    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:41.552425    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:41.552425    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:41.554655    9720 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:03:41.555180    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:41.555180    9720 round_trippers.go:587]     Audit-Id: 4791c877-c16f-455b-873b-d1e1939a01f7
	I0407 14:03:41.555180    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:41.555180    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:41.555180    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:41.555180    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:41.555262    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:41 GMT
	I0407 14:03:41.555923    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 34 35  37 38 00 42 08 08 dd b4  |d4df2.4578.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21014 chars]
	 >
	I0407 14:03:41.556107    9720 pod_ready.go:93] pod "kube-apiserver-multinode-140200" in "kube-system" namespace has status "Ready":"True"
	I0407 14:03:41.556157    9720 pod_ready.go:82] duration metric: took 7.428ms for pod "kube-apiserver-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:03:41.556157    9720 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:03:41.556243    9720 type.go:168] "Request Body" body=""
	I0407 14:03:41.556284    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-140200
	I0407 14:03:41.556284    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:41.556284    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:41.556284    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:41.558553    9720 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:03:41.559045    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:41.559045    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:41.559045    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:41.559045    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:41.559045    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:41.559045    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:41 GMT
	I0407 14:03:41.559045    9720 round_trippers.go:587]     Audit-Id: b4e21669-bf44-4bc2-b822-b387f6c61226
	I0407 14:03:41.559217    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  e6 30 0a 98 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.0....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 31 34 30 32 30 30 12  |ultinode-140200.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 61 37 63 36 65 33  62 62 2d 31 39 37 63 2d  |*$a7c6e3bb-197c-|
		00000060  34 33 34 65 2d 39 66 31  39 2d 37 34 64 37 65 34  |434e-9f19-74d7e4|
		00000070  38 62 35 30 64 65 32 03  34 30 31 38 00 42 08 08  |8b50de2.4018.B..|
		00000080  e0 b4 cf bf 06 10 00 5a  24 0a 09 63 6f 6d 70 6f  |.......Z$..compo|
		00000090  6e 65 6e 74 12 17 6b 75  62 65 2d 63 6f 6e 74 72  |nent..kube-contr|
		000000a0  6f 6c 6c 65 72 2d 6d 61  6e 61 67 65 72 5a 15 0a  |oller-managerZ..|
		000000b0  04 74 69 65 72 12 0d 63  6f 6e 74 72 6f 6c 2d 70  |.tier..control-p|
		000000c0  6c 61 6e 65 62 3d 0a 19  6b 75 62 65 72 6e 65 74  |laneb=..kuberne [truncated 29940 chars]
	 >
	I0407 14:03:41.559676    9720 type.go:168] "Request Body" body=""
	I0407 14:03:41.559676    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:03:41.559676    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:41.559676    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:41.559676    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:41.562869    9720 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:03:41.562869    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:41.562869    9720 round_trippers.go:587]     Audit-Id: c9d21dfa-93f3-42e0-a085-a8e9d8bddbef
	I0407 14:03:41.562869    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:41.562869    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:41.562869    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:41.562869    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:41.562979    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:41 GMT
	I0407 14:03:41.563388    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 34 35  37 38 00 42 08 08 dd b4  |d4df2.4578.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21014 chars]
	 >
	I0407 14:03:41.563475    9720 pod_ready.go:93] pod "kube-controller-manager-multinode-140200" in "kube-system" namespace has status "Ready":"True"
	I0407 14:03:41.563580    9720 pod_ready.go:82] duration metric: took 7.3179ms for pod "kube-controller-manager-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:03:41.563580    9720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2r7lj" in "kube-system" namespace to be "Ready" ...
	I0407 14:03:41.563619    9720 type.go:168] "Request Body" body=""
	I0407 14:03:41.720407    9720 request.go:661] Waited for 156.786ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2r7lj
	I0407 14:03:41.720407    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2r7lj
	I0407 14:03:41.720407    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:41.720407    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:41.720407    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:41.726702    9720 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 14:03:41.726804    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:41.726804    9720 round_trippers.go:587]     Audit-Id: a2fd8e5b-17a1-497d-a4e8-cc084f13f8fe
	I0407 14:03:41.726804    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:41.726804    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:41.726804    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:41.726804    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:41.726804    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:41 GMT
	I0407 14:03:41.727430    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  a0 25 0a be 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.%.....kube-prox|
		00000020  79 2d 32 72 37 6c 6a 12  0b 6b 75 62 65 2d 70 72  |y-2r7lj..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 34 38 39  32 64 37 30 33 2d 66 63  |m".*$4892d703-fc|
		00000050  34 33 2d 34 66 36 37 2d  38 34 39 33 2d 65 61 65  |43-4f67-8493-eae|
		00000060  61 65 38 63 35 65 37 36  35 32 03 36 33 32 38 00  |ae8c5e7652.6328.|
		00000070  42 08 08 a0 b6 cf bf 06  10 00 5a 26 0a 18 63 6f  |B.........Z&..co|
		00000080  6e 74 72 6f 6c 6c 65 72  2d 72 65 76 69 73 69 6f  |ntroller-revisio|
		00000090  6e 2d 68 61 73 68 12 0a  37 62 62 38 34 63 34 39  |n-hash..7bb84c49|
		000000a0  38 34 5a 15 0a 07 6b 38  73 2d 61 70 70 12 0a 6b  |84Z...k8s-app..k|
		000000b0  75 62 65 2d 70 72 6f 78  79 5a 1c 0a 17 70 6f 64  |ube-proxyZ...pod|
		000000c0  2d 74 65 6d 70 6c 61 74  65 2d 67 65 6e 65 72 61  |-template-gener [truncated 22666 chars]
	 >
	I0407 14:03:41.727430    9720 type.go:168] "Request Body" body=""
	I0407 14:03:41.921187    9720 request.go:661] Waited for 193.7559ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:41.921187    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:03:41.921187    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:41.921187    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:41.921187    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:41.925584    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:03:41.925923    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:41.925923    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:41.925923    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:41.925923    9720 round_trippers.go:587]     Content-Length: 2967
	I0407 14:03:41.925923    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:41 GMT
	I0407 14:03:41.925923    9720 round_trippers.go:587]     Audit-Id: e274779a-3f9b-48ed-aad8-4a017f9e5438
	I0407 14:03:41.925923    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:41.925923    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:41.926249    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 80 17 0a af 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 03 36 35 36 38 00 42  |f2f300172.6568.B|
		00000060  08 08 a0 b6 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000070  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000080  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		00000090  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000a0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000b0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		000000c0  68 12 05 61 6d 64 36 34  5a 2e 0a 16 6b 75 62 65  |h..amd64Z...kub [truncated 13661 chars]
	 >
	I0407 14:03:41.926466    9720 pod_ready.go:93] pod "kube-proxy-2r7lj" in "kube-system" namespace has status "Ready":"True"
	I0407 14:03:41.926466    9720 pod_ready.go:82] duration metric: took 362.8829ms for pod "kube-proxy-2r7lj" in "kube-system" namespace to be "Ready" ...
	I0407 14:03:41.926466    9720 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9rx2d" in "kube-system" namespace to be "Ready" ...
	I0407 14:03:41.926466    9720 type.go:168] "Request Body" body=""
	I0407 14:03:42.120498    9720 request.go:661] Waited for 194.0309ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9rx2d
	I0407 14:03:42.120498    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9rx2d
	I0407 14:03:42.120498    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:42.120498    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:42.120498    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:42.125460    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:03:42.125545    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:42.125569    9720 round_trippers.go:587]     Audit-Id: a2237c31-2046-40df-b6ff-c90cc431ec8c
	I0407 14:03:42.125569    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:42.125598    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:42.125598    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:42.125598    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:42.125598    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:42 GMT
	I0407 14:03:42.126310    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  98 25 0a be 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.%.....kube-prox|
		00000020  79 2d 39 72 78 32 64 12  0b 6b 75 62 65 2d 70 72  |y-9rx2d..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 32 65 61  61 62 32 35 64 2d 66 65  |m".*$2eaab25d-fe|
		00000050  30 62 2d 34 63 34 38 2d  61 63 36 62 2d 34 32 30  |0b-4c48-ac6b-420|
		00000060  39 35 66 35 66 62 63 65  36 32 03 33 39 39 38 00  |95f5fbce62.3998.|
		00000070  42 08 08 e5 b4 cf bf 06  10 00 5a 26 0a 18 63 6f  |B.........Z&..co|
		00000080  6e 74 72 6f 6c 6c 65 72  2d 72 65 76 69 73 69 6f  |ntroller-revisio|
		00000090  6e 2d 68 61 73 68 12 0a  37 62 62 38 34 63 34 39  |n-hash..7bb84c49|
		000000a0  38 34 5a 15 0a 07 6b 38  73 2d 61 70 70 12 0a 6b  |84Z...k8s-app..k|
		000000b0  75 62 65 2d 70 72 6f 78  79 5a 1c 0a 17 70 6f 64  |ube-proxyZ...pod|
		000000c0  2d 74 65 6d 70 6c 61 74  65 2d 67 65 6e 65 72 61  |-template-gener [truncated 22595 chars]
	 >
	I0407 14:03:42.126310    9720 type.go:168] "Request Body" body=""
	I0407 14:03:42.321439    9720 request.go:661] Waited for 195.127ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:03:42.321439    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:03:42.321439    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:42.321439    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:42.321439    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:42.326381    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:03:42.326381    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:42.326381    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:42.326381    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:42.326381    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:42.326381    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:42 GMT
	I0407 14:03:42.326381    9720 round_trippers.go:587]     Audit-Id: 9a204230-510e-4232-a9c1-c008874ae608
	I0407 14:03:42.326381    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:42.326827    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 34 35  37 38 00 42 08 08 dd b4  |d4df2.4578.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21014 chars]
	 >
	I0407 14:03:42.327049    9720 pod_ready.go:93] pod "kube-proxy-9rx2d" in "kube-system" namespace has status "Ready":"True"
	I0407 14:03:42.327049    9720 pod_ready.go:82] duration metric: took 400.5803ms for pod "kube-proxy-9rx2d" in "kube-system" namespace to be "Ready" ...
	I0407 14:03:42.327100    9720 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:03:42.327235    9720 type.go:168] "Request Body" body=""
	I0407 14:03:42.520903    9720 request.go:661] Waited for 193.667ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-140200
	I0407 14:03:42.521461    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-140200
	I0407 14:03:42.521461    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:42.521461    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:42.521461    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:42.530661    9720 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0407 14:03:42.530661    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:42.530799    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:42.530799    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:42.530799    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:42.530799    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:42.530799    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:42 GMT
	I0407 14:03:42.530799    9720 round_trippers.go:587]     Audit-Id: ae710db9-c4f4-437a-b7a4-1f5d7f6becbb
	I0407 14:03:42.531132    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  f1 22 0a 80 18 0a 1f 6b  75 62 65 2d 73 63 68 65  |.".....kube-sche|
		00000020  64 75 6c 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |duler-multinode-|
		00000030  31 34 30 32 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |140200....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 38 38 64 66 65 65 65  |ystem".*$88dfeee|
		00000050  38 2d 61 33 63 31 2d 34  38 35 62 2d 61 62 66 65  |8-a3c1-485b-abfe|
		00000060  2d 39 65 61 66 30 30 35  37 64 36 63 66 32 03 34  |-9eaf0057d6cf2.4|
		00000070  30 35 38 00 42 08 08 e0  b4 cf bf 06 10 00 5a 1b  |058.B.........Z.|
		00000080  0a 09 63 6f 6d 70 6f 6e  65 6e 74 12 0e 6b 75 62  |..component..kub|
		00000090  65 2d 73 63 68 65 64 75  6c 65 72 5a 15 0a 04 74  |e-schedulerZ...t|
		000000a0  69 65 72 12 0d 63 6f 6e  74 72 6f 6c 2d 70 6c 61  |ier..control-pla|
		000000b0  6e 65 62 3d 0a 19 6b 75  62 65 72 6e 65 74 65 73  |neb=..kubernetes|
		000000c0  2e 69 6f 2f 63 6f 6e 66  69 67 2e 68 61 73 68 12  |.io/config.hash [truncated 21166 chars]
	 >
	I0407 14:03:42.531456    9720 type.go:168] "Request Body" body=""
	I0407 14:03:42.720380    9720 request.go:661] Waited for 188.8154ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:03:42.720380    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes/multinode-140200
	I0407 14:03:42.720380    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:42.720380    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:42.720380    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:42.727793    9720 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 14:03:42.727879    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:42.727879    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:42.727879    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:42 GMT
	I0407 14:03:42.727879    9720 round_trippers.go:587]     Audit-Id: fd968dd8-b8b1-4d37-ae9c-0e8a750ab118
	I0407 14:03:42.727879    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:42.727944    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:42.727944    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:42.728090    9720 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d6 22 0a 8a 11 0a 10  6d 75 6c 74 69 6e 6f 64  |..".....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 03 34 35  37 38 00 42 08 08 dd b4  |d4df2.4578.B....|
		00000060  cf bf 06 10 00 5a 20 0a  17 62 65 74 61 2e 6b 75  |.....Z ..beta.ku|
		00000070  62 65 72 6e 65 74 65 73  2e 69 6f 2f 61 72 63 68  |bernetes.io/arch|
		00000080  12 05 61 6d 64 36 34 5a  1e 0a 15 62 65 74 61 2e  |..amd64Z...beta.|
		00000090  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 6f 73  |kubernetes.io/os|
		000000a0  12 05 6c 69 6e 75 78 5a  1b 0a 12 6b 75 62 65 72  |..linuxZ...kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 61 72 63 68 12 05 61  |netes.io/arch..a|
		000000c0  6d 64 36 34 5a 2a 0a 16  6b 75 62 65 72 6e 65 74  |md64Z*..kuberne [truncated 21014 chars]
	 >
	I0407 14:03:42.728090    9720 pod_ready.go:93] pod "kube-scheduler-multinode-140200" in "kube-system" namespace has status "Ready":"True"
	I0407 14:03:42.728090    9720 pod_ready.go:82] duration metric: took 400.9872ms for pod "kube-scheduler-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:03:42.728090    9720 pod_ready.go:39] duration metric: took 1.2042506s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 14:03:42.728090    9720 system_svc.go:44] waiting for kubelet service to be running ....
	I0407 14:03:42.741454    9720 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 14:03:42.770584    9720 system_svc.go:56] duration metric: took 42.4934ms WaitForService to wait for kubelet
	I0407 14:03:42.770712    9720 kubeadm.go:582] duration metric: took 29.9946971s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 14:03:42.770712    9720 node_conditions.go:102] verifying NodePressure condition ...
	I0407 14:03:42.770880    9720 type.go:204] "Request Body" body=""
	I0407 14:03:42.920887    9720 request.go:661] Waited for 149.9776ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.92.89:8443/api/v1/nodes
	I0407 14:03:42.920887    9720 round_trippers.go:470] GET https://172.17.92.89:8443/api/v1/nodes
	I0407 14:03:42.920887    9720 round_trippers.go:476] Request Headers:
	I0407 14:03:42.920887    9720 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:03:42.920887    9720 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:03:42.925017    9720 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:03:42.925017    9720 round_trippers.go:584] Response Headers:
	I0407 14:03:42.925017    9720 round_trippers.go:587]     Audit-Id: f3020f68-f3e6-41c3-8262-d2966e1d5d73
	I0407 14:03:42.925017    9720 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:03:42.925017    9720 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:03:42.925017    9720 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:03:42.925017    9720 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:03:42.925017    9720 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:03:42 GMT
	I0407 14:03:42.926269    9720 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0e 0a 02  76 31 12 08 4e 6f 64 65  |k8s.....v1..Node|
		00000010  4c 69 73 74 12 8d 3d 0a  09 0a 00 12 03 36 35 38  |List..=......658|
		00000020  1a 00 12 d6 22 0a 8a 11  0a 10 6d 75 6c 74 69 6e  |....".....multin|
		00000030  6f 64 65 2d 31 34 30 32  30 30 12 00 1a 00 22 00  |ode-140200....".|
		00000040  2a 24 31 66 35 33 62 34  63 64 2d 61 62 30 31 2d  |*$1f53b4cd-ab01-|
		00000050  34 32 63 61 2d 61 36 61  36 2d 61 39 33 65 66 63  |42ca-a6a6-a93efc|
		00000060  39 62 64 34 64 66 32 03  34 35 37 38 00 42 08 08  |9bd4df2.4578.B..|
		00000070  dd b4 cf bf 06 10 00 5a  20 0a 17 62 65 74 61 2e  |.......Z ..beta.|
		00000080  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		00000090  63 68 12 05 61 6d 64 36  34 5a 1e 0a 15 62 65 74  |ch..amd64Z...bet|
		000000a0  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		000000b0  6f 73 12 05 6c 69 6e 75  78 5a 1b 0a 12 6b 75 62  |os..linuxZ...kub|
		000000c0  65 72 6e 65 74 65 73 2e  69 6f 2f 61 72 63 68 12  |ernetes.io/arch [truncated 37757 chars]
	 >
	I0407 14:03:42.926589    9720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 14:03:42.926674    9720 node_conditions.go:123] node cpu capacity is 2
	I0407 14:03:42.926674    9720 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 14:03:42.926674    9720 node_conditions.go:123] node cpu capacity is 2
	I0407 14:03:42.926674    9720 node_conditions.go:105] duration metric: took 155.9612ms to run NodePressure ...
	I0407 14:03:42.926674    9720 start.go:241] waiting for startup goroutines ...
	I0407 14:03:42.926770    9720 start.go:255] writing updated cluster config ...
	I0407 14:03:42.938597    9720 ssh_runner.go:195] Run: rm -f paused
	I0407 14:03:43.093392    9720 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0407 14:03:43.097032    9720 out.go:177] * Done! kubectl is now configured to use "multinode-140200" cluster and "default" namespace by default
	
	
	==> Docker <==
	Apr 07 14:00:26 multinode-140200 dockerd[1454]: time="2025-04-07T14:00:26.494302808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 14:00:26 multinode-140200 dockerd[1454]: time="2025-04-07T14:00:26.501280063Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 14:00:26 multinode-140200 dockerd[1454]: time="2025-04-07T14:00:26.501364064Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 14:00:26 multinode-140200 dockerd[1454]: time="2025-04-07T14:00:26.501376564Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 14:00:26 multinode-140200 dockerd[1454]: time="2025-04-07T14:00:26.501655966Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 14:00:26 multinode-140200 cri-dockerd[1347]: time="2025-04-07T14:00:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/f6c740bfe5bb53b81293b8880df29a34ac12df1ec559482969980100e90c1e1f/resolv.conf as [nameserver 172.17.80.1]"
	Apr 07 14:00:26 multinode-140200 cri-dockerd[1347]: time="2025-04-07T14:00:26Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/47eb0b16ce1df0a53146941e581feb04ad84d703f6afe8ff2dcf518dd48dbb80/resolv.conf as [nameserver 172.17.80.1]"
	Apr 07 14:00:26 multinode-140200 dockerd[1454]: time="2025-04-07T14:00:26.827925960Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 14:00:26 multinode-140200 dockerd[1454]: time="2025-04-07T14:00:26.828155762Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 14:00:26 multinode-140200 dockerd[1454]: time="2025-04-07T14:00:26.828171162Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 14:00:26 multinode-140200 dockerd[1454]: time="2025-04-07T14:00:26.828322963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 14:00:27 multinode-140200 dockerd[1454]: time="2025-04-07T14:00:27.012571194Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 14:00:27 multinode-140200 dockerd[1454]: time="2025-04-07T14:00:27.012772996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 14:00:27 multinode-140200 dockerd[1454]: time="2025-04-07T14:00:27.012813896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 14:00:27 multinode-140200 dockerd[1454]: time="2025-04-07T14:00:27.013074999Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 14:04:08 multinode-140200 dockerd[1454]: time="2025-04-07T14:04:08.245857080Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 14:04:08 multinode-140200 dockerd[1454]: time="2025-04-07T14:04:08.246111582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 14:04:08 multinode-140200 dockerd[1454]: time="2025-04-07T14:04:08.246157082Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 14:04:08 multinode-140200 dockerd[1454]: time="2025-04-07T14:04:08.246481784Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 14:04:08 multinode-140200 cri-dockerd[1347]: time="2025-04-07T14:04:08Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/732344eba89aba06b2b3ca85dad8d58591f92e4fce720352bd4f94e39eb086a9/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 07 14:04:09 multinode-140200 cri-dockerd[1347]: time="2025-04-07T14:04:09Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 07 14:04:10 multinode-140200 dockerd[1454]: time="2025-04-07T14:04:10.094963587Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 14:04:10 multinode-140200 dockerd[1454]: time="2025-04-07T14:04:10.095112388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 14:04:10 multinode-140200 dockerd[1454]: time="2025-04-07T14:04:10.095129688Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 14:04:10 multinode-140200 dockerd[1454]: time="2025-04-07T14:04:10.096236299Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	016ef6290457d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   50 seconds ago      Running             busybox                   0                   732344eba89ab       busybox-58667487b6-kt4sh
	b2d29d6fc7748       c69fa2e9cbf5f                                                                                         4 minutes ago       Running             coredns                   0                   47eb0b16ce1df       coredns-668d6bf9bc-5fp4f
	1e0d3f9a0f217       6e38f40d628db                                                                                         4 minutes ago       Running             storage-provisioner       0                   f6c740bfe5bb5       storage-provisioner
	2a1208136f157       kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495              4 minutes ago       Running             kindnet-cni               0                   0d317e51cbf8d       kindnet-zkw9q
	ec26042b52719       f1332858868e1                                                                                         4 minutes ago       Running             kube-proxy                0                   728d07c29084b       kube-proxy-9rx2d
	8c615c7e05066       b6a454c5a800d                                                                                         5 minutes ago       Running             kube-controller-manager   0                   8bd2f8fc3a28f       kube-controller-manager-multinode-140200
	159f6e03fef6f       d8e673e7c9983                                                                                         5 minutes ago       Running             kube-scheduler            0                   d7cc037737938       kube-scheduler-multinode-140200
	783fd069538d1       a9e7e6b294baf                                                                                         5 minutes ago       Running             etcd                      0                   ad64d975eb393       etcd-multinode-140200
	92c49129b5b09       85b7a174738ba                                                                                         5 minutes ago       Running             kube-apiserver            0                   50c1342f82144       kube-apiserver-multinode-140200
	
	
	==> coredns [b2d29d6fc774] <==
	[INFO] 10.244.0.3:56711 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000163401s
	[INFO] 10.244.1.2:54307 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000143801s
	[INFO] 10.244.1.2:33319 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000803s
	[INFO] 10.244.1.2:34592 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170002s
	[INFO] 10.244.1.2:36193 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152002s
	[INFO] 10.244.1.2:57995 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000169302s
	[INFO] 10.244.1.2:52780 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000165902s
	[INFO] 10.244.1.2:42893 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000260902s
	[INFO] 10.244.1.2:60152 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000182002s
	[INFO] 10.244.0.3:48264 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000272902s
	[INFO] 10.244.0.3:59185 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149502s
	[INFO] 10.244.0.3:57040 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159402s
	[INFO] 10.244.0.3:52459 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000149602s
	[INFO] 10.244.1.2:57811 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113901s
	[INFO] 10.244.1.2:40249 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000280403s
	[INFO] 10.244.1.2:34055 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145801s
	[INFO] 10.244.1.2:43241 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000161602s
	[INFO] 10.244.0.3:46342 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125401s
	[INFO] 10.244.0.3:42268 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.001186212s
	[INFO] 10.244.0.3:33339 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000125601s
	[INFO] 10.244.0.3:34226 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000112501s
	[INFO] 10.244.1.2:43833 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000279102s
	[INFO] 10.244.1.2:46614 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000189202s
	[INFO] 10.244.1.2:52638 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000225402s
	[INFO] 10.244.1.2:40399 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000151102s
	
	
	==> describe nodes <==
	Name:               multinode-140200
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-140200
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
	                    minikube.k8s.io/name=multinode-140200
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_07T14_00_01_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 13:59:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-140200
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 14:04:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 14:04:35 +0000   Mon, 07 Apr 2025 13:59:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 14:04:35 +0000   Mon, 07 Apr 2025 13:59:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 14:04:35 +0000   Mon, 07 Apr 2025 13:59:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 14:04:35 +0000   Mon, 07 Apr 2025 14:00:25 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.92.89
	  Hostname:    multinode-140200
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 4f06b904591848dea4c118232a5f896c
	  System UUID:                25cd271c-0dd5-b642-826d-3f80486d9e38
	  Boot ID:                    1d5591a5-3a34-4ae4-8a14-2033f9857a9e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-kt4sh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 coredns-668d6bf9bc-5fp4f                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     4m53s
	  kube-system                 etcd-multinode-140200                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m59s
	  kube-system                 kindnet-zkw9q                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m54s
	  kube-system                 kube-apiserver-multinode-140200             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-controller-manager-multinode-140200    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-proxy-9rx2d                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m54s
	  kube-system                 kube-scheduler-multinode-140200             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m46s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 4m52s                kube-proxy       
	  Normal  NodeHasSufficientMemory  5m7s (x8 over 5m7s)  kubelet          Node multinode-140200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m7s (x8 over 5m7s)  kubelet          Node multinode-140200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m7s (x7 over 5m7s)  kubelet          Node multinode-140200 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m7s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 4m59s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m59s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m59s                kubelet          Node multinode-140200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m59s                kubelet          Node multinode-140200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m59s                kubelet          Node multinode-140200 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4m55s                node-controller  Node multinode-140200 event: Registered Node multinode-140200 in Controller
	  Normal  NodeReady                4m34s                kubelet          Node multinode-140200 status is now: NodeReady
	
	
	Name:               multinode-140200-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-140200-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
	                    minikube.k8s.io/name=multinode-140200
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_04_07T14_03_12_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 14:03:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-140200-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 14:04:55 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 14:04:13 +0000   Mon, 07 Apr 2025 14:03:12 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 14:04:13 +0000   Mon, 07 Apr 2025 14:03:12 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 14:04:13 +0000   Mon, 07 Apr 2025 14:03:12 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 14:04:13 +0000   Mon, 07 Apr 2025 14:03:41 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.82.40
	  Hostname:    multinode-140200-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 fcc15e3b5c0e48e2bb4f0703dca46560
	  System UUID:                f00434e9-33d2-e941-923c-7dd3ed460cdb
	  Boot ID:                    7b86de7f-44f1-42b1-bf68-d7c8427db9b7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-vgl84    0 (0%)        0 (0%)      0 (0%)           0 (0%)         52s
	  kube-system                 kindnet-pv67r               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      107s
	  kube-system                 kube-proxy-2r7lj            0 (0%)        0 (0%)      0 (0%)           0 (0%)         107s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 96s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  107s (x2 over 107s)  kubelet          Node multinode-140200-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    107s (x2 over 107s)  kubelet          Node multinode-140200-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     107s (x2 over 107s)  kubelet          Node multinode-140200-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  107s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           105s                 node-controller  Node multinode-140200-m02 event: Registered Node multinode-140200-m02 in Controller
	  Normal  NodeReady                78s                  kubelet          Node multinode-140200-m02 status is now: NodeReady
	
	
	==> dmesg <==
	[Apr 7 13:58] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +48.480220] systemd-fstab-generator[646]: Ignoring "noauto" option for root device
	[  +0.164859] systemd-fstab-generator[658]: Ignoring "noauto" option for root device
	[Apr 7 13:59] systemd-fstab-generator[1013]: Ignoring "noauto" option for root device
	[  +0.103547] kauditd_printk_skb: 69 callbacks suppressed
	[  +0.503630] systemd-fstab-generator[1051]: Ignoring "noauto" option for root device
	[  +0.164139] systemd-fstab-generator[1063]: Ignoring "noauto" option for root device
	[  +0.235125] systemd-fstab-generator[1078]: Ignoring "noauto" option for root device
	[  +2.793484] systemd-fstab-generator[1300]: Ignoring "noauto" option for root device
	[  +0.199541] systemd-fstab-generator[1312]: Ignoring "noauto" option for root device
	[  +0.190622] systemd-fstab-generator[1324]: Ignoring "noauto" option for root device
	[  +0.238453] systemd-fstab-generator[1339]: Ignoring "noauto" option for root device
	[ +11.026310] systemd-fstab-generator[1438]: Ignoring "noauto" option for root device
	[  +0.103266] kauditd_printk_skb: 206 callbacks suppressed
	[  +3.566801] systemd-fstab-generator[1698]: Ignoring "noauto" option for root device
	[  +6.244625] systemd-fstab-generator[1844]: Ignoring "noauto" option for root device
	[  +0.088290] kauditd_printk_skb: 74 callbacks suppressed
	[  +8.554367] systemd-fstab-generator[2281]: Ignoring "noauto" option for root device
	[  +0.132940] kauditd_printk_skb: 62 callbacks suppressed
	[Apr 7 14:00] systemd-fstab-generator[2380]: Ignoring "noauto" option for root device
	[  +0.177116] kauditd_printk_skb: 12 callbacks suppressed
	[  +8.952676] kauditd_printk_skb: 51 callbacks suppressed
	[Apr 7 14:04] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [783fd069538d] <==
	{"level":"info","ts":"2025-04-07T13:59:54.837953Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56b92fcdf3016dd0 became candidate at term 2"}
	{"level":"info","ts":"2025-04-07T13:59:54.838304Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56b92fcdf3016dd0 received MsgVoteResp from 56b92fcdf3016dd0 at term 2"}
	{"level":"info","ts":"2025-04-07T13:59:54.838720Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56b92fcdf3016dd0 became leader at term 2"}
	{"level":"info","ts":"2025-04-07T13:59:54.838833Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 56b92fcdf3016dd0 elected leader 56b92fcdf3016dd0 at term 2"}
	{"level":"info","ts":"2025-04-07T13:59:54.845332Z","caller":"etcdserver/server.go:2651","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T13:59:54.851473Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"56b92fcdf3016dd0","local-member-attributes":"{Name:multinode-140200 ClientURLs:[https://172.17.92.89:2379]}","request-path":"/0/members/56b92fcdf3016dd0/attributes","cluster-id":"cc5f18ba5e9dce7b","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-07T13:59:54.854323Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T13:59:54.854868Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T13:59:54.855913Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T13:59:54.856837Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.92.89:2379"}
	{"level":"info","ts":"2025-04-07T13:59:54.860167Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T13:59:54.861252Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2025-04-07T13:59:54.861456Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-07T13:59:54.865223Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-07T13:59:54.861535Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cc5f18ba5e9dce7b","local-member-id":"56b92fcdf3016dd0","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T13:59:54.865559Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T13:59:54.865800Z","caller":"etcdserver/server.go:2675","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T14:00:08.967509Z","caller":"traceutil/trace.go:171","msg":"trace[223633905] transaction","detail":"{read_only:false; response_revision:404; number_of_response:1; }","duration":"154.445719ms","start":"2025-04-07T14:00:08.813036Z","end":"2025-04-07T14:00:08.967482Z","steps":["trace[223633905] 'process raft request'  (duration: 154.247317ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T14:00:13.517580Z","caller":"traceutil/trace.go:171","msg":"trace[1424404994] transaction","detail":"{read_only:false; response_revision:409; number_of_response:1; }","duration":"114.01072ms","start":"2025-04-07T14:00:13.403529Z","end":"2025-04-07T14:00:13.517539Z","steps":["trace[1424404994] 'process raft request'  (duration: 113.847818ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T14:00:14.325181Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.534159ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/multinode-140200\" limit:1 ","response":"range_response_count:1 size:4486"}
	{"level":"info","ts":"2025-04-07T14:00:14.325432Z","caller":"traceutil/trace.go:171","msg":"trace[1067876642] range","detail":"{range_begin:/registry/minions/multinode-140200; range_end:; response_count:1; response_revision:417; }","duration":"193.906963ms","start":"2025-04-07T14:00:14.131507Z","end":"2025-04-07T14:00:14.325414Z","steps":["trace[1067876642] 'range keys from in-memory index tree'  (duration: 193.335357ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T14:00:41.263866Z","caller":"traceutil/trace.go:171","msg":"trace[1367234037] transaction","detail":"{read_only:false; response_revision:464; number_of_response:1; }","duration":"162.921626ms","start":"2025-04-07T14:00:41.100926Z","end":"2025-04-07T14:00:41.263847Z","steps":["trace[1367234037] 'process raft request'  (duration: 162.734825ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T14:03:21.908638Z","caller":"traceutil/trace.go:171","msg":"trace[930314409] transaction","detail":"{read_only:false; response_revision:624; number_of_response:1; }","duration":"151.544648ms","start":"2025-04-07T14:03:21.757036Z","end":"2025-04-07T14:03:21.908581Z","steps":["trace[930314409] 'process raft request'  (duration: 151.369347ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T14:03:22.512569Z","caller":"traceutil/trace.go:171","msg":"trace[1341431565] transaction","detail":"{read_only:false; response_revision:625; number_of_response:1; }","duration":"108.804753ms","start":"2025-04-07T14:03:22.403745Z","end":"2025-04-07T14:03:22.512550Z","steps":["trace[1341431565] 'process raft request'  (duration: 108.618951ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T14:03:27.108152Z","caller":"traceutil/trace.go:171","msg":"trace[660156910] transaction","detail":"{read_only:false; response_revision:636; number_of_response:1; }","duration":"187.653295ms","start":"2025-04-07T14:03:26.920478Z","end":"2025-04-07T14:03:27.108131Z","steps":["trace[660156910] 'process raft request'  (duration: 187.289792ms)"],"step_count":1}
	
	
	==> kernel <==
	 14:04:59 up 7 min,  0 users,  load average: 0.23, 0.40, 0.24
	Linux multinode-140200 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2a1208136f15] <==
	I0407 14:03:55.725323       1 main.go:324] Node multinode-140200-m02 has CIDR [10.244.1.0/24] 
	I0407 14:04:05.733804       1 main.go:297] Handling node with IPs: map[172.17.92.89:{}]
	I0407 14:04:05.733913       1 main.go:301] handling current node
	I0407 14:04:05.733933       1 main.go:297] Handling node with IPs: map[172.17.82.40:{}]
	I0407 14:04:05.733942       1 main.go:324] Node multinode-140200-m02 has CIDR [10.244.1.0/24] 
	I0407 14:04:15.725004       1 main.go:297] Handling node with IPs: map[172.17.92.89:{}]
	I0407 14:04:15.725493       1 main.go:301] handling current node
	I0407 14:04:15.725632       1 main.go:297] Handling node with IPs: map[172.17.82.40:{}]
	I0407 14:04:15.725726       1 main.go:324] Node multinode-140200-m02 has CIDR [10.244.1.0/24] 
	I0407 14:04:25.732068       1 main.go:297] Handling node with IPs: map[172.17.92.89:{}]
	I0407 14:04:25.732188       1 main.go:301] handling current node
	I0407 14:04:25.732246       1 main.go:297] Handling node with IPs: map[172.17.82.40:{}]
	I0407 14:04:25.732255       1 main.go:324] Node multinode-140200-m02 has CIDR [10.244.1.0/24] 
	I0407 14:04:35.725013       1 main.go:297] Handling node with IPs: map[172.17.92.89:{}]
	I0407 14:04:35.725046       1 main.go:301] handling current node
	I0407 14:04:35.725064       1 main.go:297] Handling node with IPs: map[172.17.82.40:{}]
	I0407 14:04:35.725070       1 main.go:324] Node multinode-140200-m02 has CIDR [10.244.1.0/24] 
	I0407 14:04:45.725456       1 main.go:297] Handling node with IPs: map[172.17.92.89:{}]
	I0407 14:04:45.725600       1 main.go:301] handling current node
	I0407 14:04:45.725624       1 main.go:297] Handling node with IPs: map[172.17.82.40:{}]
	I0407 14:04:45.725633       1 main.go:324] Node multinode-140200-m02 has CIDR [10.244.1.0/24] 
	I0407 14:04:55.730380       1 main.go:297] Handling node with IPs: map[172.17.92.89:{}]
	I0407 14:04:55.730767       1 main.go:301] handling current node
	I0407 14:04:55.730854       1 main.go:297] Handling node with IPs: map[172.17.82.40:{}]
	I0407 14:04:55.730954       1 main.go:324] Node multinode-140200-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [92c49129b5b0] <==
	I0407 13:59:58.274553       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0407 13:59:58.288315       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0407 13:59:58.288407       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0407 13:59:59.451565       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0407 13:59:59.539652       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0407 13:59:59.694116       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0407 13:59:59.722099       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.92.89]
	I0407 13:59:59.723572       1 controller.go:615] quota admission added evaluator for: endpoints
	I0407 13:59:59.736178       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0407 14:00:00.351143       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0407 14:00:00.505950       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0407 14:00:00.588251       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0407 14:00:00.613820       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0407 14:00:05.769762       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	I0407 14:00:05.954426       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	E0407 14:04:14.346842       1 conn.go:339] Error on socket receive: read tcp 172.17.92.89:8443->172.17.80.1:55836: use of closed network connection
	E0407 14:04:14.862796       1 conn.go:339] Error on socket receive: read tcp 172.17.92.89:8443->172.17.80.1:55838: use of closed network connection
	E0407 14:04:15.458979       1 conn.go:339] Error on socket receive: read tcp 172.17.92.89:8443->172.17.80.1:55840: use of closed network connection
	E0407 14:04:15.983750       1 conn.go:339] Error on socket receive: read tcp 172.17.92.89:8443->172.17.80.1:55842: use of closed network connection
	E0407 14:04:16.484495       1 conn.go:339] Error on socket receive: read tcp 172.17.92.89:8443->172.17.80.1:55844: use of closed network connection
	E0407 14:04:16.983160       1 conn.go:339] Error on socket receive: read tcp 172.17.92.89:8443->172.17.80.1:55846: use of closed network connection
	E0407 14:04:17.887606       1 conn.go:339] Error on socket receive: read tcp 172.17.92.89:8443->172.17.80.1:55849: use of closed network connection
	E0407 14:04:28.387093       1 conn.go:339] Error on socket receive: read tcp 172.17.92.89:8443->172.17.80.1:55851: use of closed network connection
	E0407 14:04:28.861361       1 conn.go:339] Error on socket receive: read tcp 172.17.92.89:8443->172.17.80.1:55855: use of closed network connection
	E0407 14:04:39.364128       1 conn.go:339] Error on socket receive: read tcp 172.17.92.89:8443->172.17.80.1:55857: use of closed network connection
	
	
	==> kube-controller-manager [8c615c7e0506] <==
	I0407 14:03:12.191906       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-140200-m02" podCIDRs=["10.244.1.0/24"]
	I0407 14:03:12.191997       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m02"
	I0407 14:03:12.192142       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m02"
	I0407 14:03:12.241097       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m02"
	I0407 14:03:12.261621       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m02"
	I0407 14:03:12.793662       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m02"
	I0407 14:03:14.933340       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-140200-m02"
	I0407 14:03:14.994578       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m02"
	I0407 14:03:22.514691       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m02"
	I0407 14:03:41.335938       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m02"
	I0407 14:03:41.336656       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-140200-m02"
	I0407 14:03:41.352480       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m02"
	I0407 14:03:42.794529       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m02"
	I0407 14:03:44.955082       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m02"
	I0407 14:03:54.654831       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200"
	I0407 14:04:07.658519       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="86.72459ms"
	I0407 14:04:07.679399       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="20.751042ms"
	I0407 14:04:07.679620       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="34µs"
	I0407 14:04:07.685290       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="30.1µs"
	I0407 14:04:10.509274       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="12.510218ms"
	I0407 14:04:10.511351       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="56.801µs"
	I0407 14:04:11.548845       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="15.185244ms"
	I0407 14:04:11.548951       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="36.101µs"
	I0407 14:04:13.318277       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m02"
	I0407 14:04:35.293601       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200"
	
	
	==> kube-proxy [ec26042b5271] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0407 14:00:07.337754       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0407 14:00:07.466119       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.17.92.89"]
	E0407 14:00:07.466279       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0407 14:00:07.567557       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0407 14:00:07.567717       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0407 14:00:07.567756       1 server_linux.go:170] "Using iptables Proxier"
	I0407 14:00:07.574629       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0407 14:00:07.577367       1 server.go:497] "Version info" version="v1.32.2"
	I0407 14:00:07.577404       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 14:00:07.585487       1 config.go:199] "Starting service config controller"
	I0407 14:00:07.586284       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0407 14:00:07.586337       1 config.go:329] "Starting node config controller"
	I0407 14:00:07.586345       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0407 14:00:07.589540       1 config.go:105] "Starting endpoint slice config controller"
	I0407 14:00:07.589593       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0407 14:00:07.686785       1 shared_informer.go:320] Caches are synced for node config
	I0407 14:00:07.686825       1 shared_informer.go:320] Caches are synced for service config
	I0407 14:00:07.694325       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [159f6e03fef6] <==
	W0407 13:59:58.345474       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0407 13:59:58.345761       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 13:59:58.393834       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0407 13:59:58.393942       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 13:59:58.496079       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0407 13:59:58.496276       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0407 13:59:58.551175       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0407 13:59:58.551286       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0407 13:59:58.610381       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0407 13:59:58.610476       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 13:59:58.627893       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0407 13:59:58.628177       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 13:59:58.719927       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0407 13:59:58.720228       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 13:59:58.814245       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0407 13:59:58.814720       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 13:59:58.940493       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0407 13:59:58.940937       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 13:59:58.976373       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0407 13:59:58.976407       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0407 13:59:59.037635       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0407 13:59:59.038094       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 13:59:59.038018       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0407 13:59:59.038595       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0407 14:00:00.428814       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 07 14:00:25 multinode-140200 kubelet[2288]: I0407 14:00:25.936629    2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rn28v\" (UniqueName: \"kubernetes.io/projected/437226ae-e63d-4245-bbea-ad5c41ff9a93-kube-api-access-rn28v\") pod \"coredns-668d6bf9bc-5fp4f\" (UID: \"437226ae-e63d-4245-bbea-ad5c41ff9a93\") " pod="kube-system/coredns-668d6bf9bc-5fp4f"
	Apr 07 14:00:28 multinode-140200 kubelet[2288]: I0407 14:00:28.216779    2288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-5fp4f" podStartSLOduration=22.216760553 podStartE2EDuration="22.216760553s" podCreationTimestamp="2025-04-07 14:00:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-07 14:00:28.182546412 +0000 UTC m=+27.834145980" watchObservedRunningTime="2025-04-07 14:00:28.216760553 +0000 UTC m=+27.868360021"
	Apr 07 14:00:28 multinode-140200 kubelet[2288]: I0407 14:00:28.235993    2288 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=15.235978645 podStartE2EDuration="15.235978645s" podCreationTimestamp="2025-04-07 14:00:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2025-04-07 14:00:28.218257268 +0000 UTC m=+27.869856836" watchObservedRunningTime="2025-04-07 14:00:28.235978645 +0000 UTC m=+27.887578113"
	Apr 07 14:01:00 multinode-140200 kubelet[2288]: E0407 14:01:00.662509    2288 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 07 14:01:00 multinode-140200 kubelet[2288]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 07 14:01:00 multinode-140200 kubelet[2288]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 07 14:01:00 multinode-140200 kubelet[2288]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 07 14:01:00 multinode-140200 kubelet[2288]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 07 14:02:00 multinode-140200 kubelet[2288]: E0407 14:02:00.661278    2288 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 07 14:02:00 multinode-140200 kubelet[2288]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 07 14:02:00 multinode-140200 kubelet[2288]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 07 14:02:00 multinode-140200 kubelet[2288]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 07 14:02:00 multinode-140200 kubelet[2288]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 07 14:03:00 multinode-140200 kubelet[2288]: E0407 14:03:00.661534    2288 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 07 14:03:00 multinode-140200 kubelet[2288]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 07 14:03:00 multinode-140200 kubelet[2288]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 07 14:03:00 multinode-140200 kubelet[2288]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 07 14:03:00 multinode-140200 kubelet[2288]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 07 14:04:00 multinode-140200 kubelet[2288]: E0407 14:04:00.661130    2288 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 07 14:04:00 multinode-140200 kubelet[2288]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 07 14:04:00 multinode-140200 kubelet[2288]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 07 14:04:00 multinode-140200 kubelet[2288]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 07 14:04:00 multinode-140200 kubelet[2288]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 07 14:04:07 multinode-140200 kubelet[2288]: I0407 14:04:07.746156    2288 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cnn6l\" (UniqueName: \"kubernetes.io/projected/dcbaa934-5251-4179-bd5e-60d5d2ba403b-kube-api-access-cnn6l\") pod \"busybox-58667487b6-kt4sh\" (UID: \"dcbaa934-5251-4179-bd5e-60d5d2ba403b\") " pod="default/busybox-58667487b6-kt4sh"
	Apr 07 14:04:08 multinode-140200 kubelet[2288]: I0407 14:04:08.423955    2288 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="732344eba89aba06b2b3ca85dad8d58591f92e4fce720352bd4f94e39eb086a9"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-140200 -n multinode-140200
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-140200 -n multinode-140200: (12.0861628s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-140200 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/PingHostFrom2Pods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/PingHostFrom2Pods (58.44s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (405.86s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-140200
multinode_test.go:321: (dbg) Run:  out/minikube-windows-amd64.exe stop -p multinode-140200
E0407 14:21:55.768525    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 14:21:57.569976    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:321: (dbg) Done: out/minikube-windows-amd64.exe stop -p multinode-140200: (1m43.3837486s)
multinode_test.go:326: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-140200 --wait=true -v=8 --alsologtostderr
E0407 14:23:54.483387    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 14:26:55.770928    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:326: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p multinode-140200 --wait=true -v=8 --alsologtostderr: exit status 1 (4m22.6625093s)

                                                
                                                
-- stdout --
	* [multinode-140200] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on existing profile
	* Starting "multinode-140200" primary control-plane node in "multinode-140200" cluster
	* Restarting existing hyperv VM for "multinode-140200" ...
	* Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	* Configuring CNI (Container Networking Interface) ...
	* Verifying Kubernetes components...
	* Enabled addons: 
	
	* Starting "multinode-140200-m02" worker node in "multinode-140200" cluster
	* Restarting existing hyperv VM for "multinode-140200-m02" ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 14:22:34.222515    9664 out.go:345] Setting OutFile to fd 1152 ...
	I0407 14:22:34.301515    9664 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 14:22:34.301515    9664 out.go:358] Setting ErrFile to fd 1684...
	I0407 14:22:34.301515    9664 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 14:22:34.325617    9664 out.go:352] Setting JSON to false
	I0407 14:22:34.334322    9664 start.go:129] hostinfo: {"hostname":"minikube3","uptime":7546,"bootTime":1744028207,"procs":181,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0407 14:22:34.334322    9664 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0407 14:22:34.373317    9664 out.go:177] * [multinode-140200] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0407 14:22:34.407137    9664 notify.go:220] Checking for updates...
	I0407 14:22:34.421982    9664 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 14:22:34.439946    9664 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 14:22:34.470853    9664 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0407 14:22:34.501783    9664 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 14:22:34.526492    9664 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 14:22:34.536453    9664 config.go:182] Loaded profile config "multinode-140200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 14:22:34.536453    9664 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 14:22:40.846192    9664 out.go:177] * Using the hyperv driver based on existing profile
	I0407 14:22:40.851486    9664 start.go:297] selected driver: hyperv
	I0407 14:22:40.851486    9664 start.go:901] validating driver "hyperv" against &{Name:multinode-140200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 Cluste
rName:multinode-140200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.92.89 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.82.40 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.83.62 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:22:40.851592    9664 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 14:22:40.912713    9664 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 14:22:40.912713    9664 cni.go:84] Creating CNI manager for ""
	I0407 14:22:40.912713    9664 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0407 14:22:40.913758    9664 start.go:340] cluster config:
	{Name:multinode-140200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-140200 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.92.89 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.82.40 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.83.62 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubef
low:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:22:40.913758    9664 iso.go:125] acquiring lock: {Name:mk99bbb6a54210c1995fdf151b41c83b57c3735b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 14:22:41.006896    9664 out.go:177] * Starting "multinode-140200" primary control-plane node in "multinode-140200" cluster
	I0407 14:22:41.015176    9664 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 14:22:41.015717    9664 preload.go:146] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0407 14:22:41.016012    9664 cache.go:56] Caching tarball of preloaded images
	I0407 14:22:41.016076    9664 preload.go:172] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0407 14:22:41.016609    9664 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0407 14:22:41.016990    9664 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\config.json ...
	I0407 14:22:41.019639    9664 start.go:360] acquireMachinesLock for multinode-140200: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 14:22:41.019639    9664 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-140200"
	I0407 14:22:41.020168    9664 start.go:96] Skipping create...Using existing machine configuration
	I0407 14:22:41.020285    9664 fix.go:54] fixHost starting: 
	I0407 14:22:41.021118    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:22:44.157467    9664 main.go:141] libmachine: [stdout =====>] : Off
	
	I0407 14:22:44.157467    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:22:44.157467    9664 fix.go:112] recreateIfNeeded on multinode-140200: state=Stopped err=<nil>
	W0407 14:22:44.157467    9664 fix.go:138] unexpected machine state, will restart: <nil>
	I0407 14:22:44.161662    9664 out.go:177] * Restarting existing hyperv VM for "multinode-140200" ...
	I0407 14:22:44.164309    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-140200
	I0407 14:22:47.584814    9664 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:22:47.584883    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:22:47.584883    9664 main.go:141] libmachine: Waiting for host to start...
	I0407 14:22:47.584883    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:22:50.090728    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:22:50.090728    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:22:50.090968    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:22:52.947643    9664 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:22:52.947643    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:22:53.948723    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:22:56.402945    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:22:56.403684    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:22:56.403684    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:22:59.228555    9664 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:22:59.228555    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:00.229043    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:23:02.679251    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:23:02.679251    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:02.679251    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:23:05.532691    9664 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:23:05.533502    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:06.533929    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:23:09.017316    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:23:09.017316    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:09.018369    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:23:11.832237    9664 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:23:11.832454    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:12.833439    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:23:15.354793    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:23:15.354828    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:15.354907    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:23:18.312925    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:23:18.313136    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:18.316683    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:23:20.747224    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:23:20.747224    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:20.747608    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:23:23.705630    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:23:23.705681    9664 main.go:141] libmachine: [stderr =====>] : 
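The block above is libmachine's Hyper-V wait loop: after Start-VM it alternates querying ( Hyper-V\Get-VM <name> ).state and the first address of the first network adapter through PowerShell until the guest reports an IPv4 address (172.17.81.10 here, roughly half a minute after boot). A condensed Go sketch of that polling pattern follows; the timeout and sleep interval are assumptions for illustration, not minikube's actual values.

package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// vmIP asks PowerShell for the first address of the VM's first network adapter,
// the same expression libmachine executes in the log above.
func vmIP(vmName string) (string, error) {
	expr := fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vmName)
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const vmName = "multinode-140200"
	deadline := time.Now().Add(3 * time.Minute) // timeout chosen for illustration
	for time.Now().Before(deadline) {
		ip, err := vmIP(vmName)
		if err == nil && ip != "" && !strings.Contains(ip, ":") { // wait for an IPv4 address
			fmt.Println("VM reachable at", ip)
			return
		}
		time.Sleep(time.Second) // the log shows roughly one retry per second of idle time
	}
	fmt.Println("timed out waiting for an IPv4 address")
}
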
	I0407 14:23:23.705681    9664 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\config.json ...
	I0407 14:23:23.708663    9664 machine.go:93] provisionDockerMachine start ...
	I0407 14:23:23.708663    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:23:26.171802    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:23:26.172588    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:26.172755    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:23:29.098404    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:23:29.098492    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:29.103912    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:23:29.104615    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.81.10 22 <nil> <nil>}
	I0407 14:23:29.105212    9664 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 14:23:29.254993    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0407 14:23:29.254993    9664 buildroot.go:166] provisioning hostname "multinode-140200"
	I0407 14:23:29.254993    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:23:31.728559    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:23:31.728559    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:31.729015    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:23:34.677540    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:23:34.677540    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:34.684323    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:23:34.685086    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.81.10 22 <nil> <nil>}
	I0407 14:23:34.685086    9664 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-140200 && echo "multinode-140200" | sudo tee /etc/hostname
	I0407 14:23:34.852285    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-140200
	
	I0407 14:23:34.852285    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:23:37.198114    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:23:37.199067    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:37.199180    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:23:39.930183    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:23:39.930183    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:39.938416    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:23:39.938996    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.81.10 22 <nil> <nil>}
	I0407 14:23:39.938996    9664 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-140200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-140200/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-140200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 14:23:40.093560    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: 
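Provisioning first sets the guest hostname over SSH and then makes sure /etc/hosts resolves it: the embedded shell above leaves the file alone if any line already ends in the hostname, rewrites an existing 127.0.1.1 entry if there is one, and otherwise appends "127.0.1.1 <hostname>". The Go sketch below applies the same rule to hosts-file content passed in as a string; it illustrates the logic and is not the code minikube runs.

package main

import (
	"fmt"
	"strings"
)

// patchHosts mirrors the shell above: keep the content if the hostname is already
// present, otherwise rewrite an existing 127.0.1.1 line or append a new one.
func patchHosts(content, hostname string) string {
	lines := strings.Split(content, "\n")
	for _, l := range lines {
		if f := strings.Fields(l); len(f) > 0 && f[len(f)-1] == hostname {
			return content // hostname already resolvable
		}
	}
	for i, l := range lines {
		if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
			lines[i] = "127.0.1.1 " + hostname // rewrite the loopback alias entry
			return strings.Join(lines, "\n")
		}
	}
	return content + "\n127.0.1.1 " + hostname // nothing matched, append
}

func main() {
	fmt.Println(patchHosts("127.0.0.1 localhost\n127.0.1.1 minikube", "multinode-140200"))
}
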
	I0407 14:23:40.093560    9664 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0407 14:23:40.093560    9664 buildroot.go:174] setting up certificates
	I0407 14:23:40.093560    9664 provision.go:84] configureAuth start
	I0407 14:23:40.093560    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:23:42.265582    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:23:42.265665    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:42.265754    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:23:44.855485    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:23:44.855722    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:44.855835    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:23:47.062130    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:23:47.062905    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:47.062905    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:23:49.772804    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:23:49.772804    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:49.772804    9664 provision.go:143] copyHostCerts
	I0407 14:23:49.773810    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0407 14:23:49.774200    9664 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0407 14:23:49.774320    9664 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0407 14:23:49.774449    9664 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0407 14:23:49.775926    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0407 14:23:49.776527    9664 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0407 14:23:49.776690    9664 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0407 14:23:49.777186    9664 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0407 14:23:49.778050    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0407 14:23:49.778050    9664 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0407 14:23:49.778050    9664 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0407 14:23:49.778753    9664 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0407 14:23:49.780026    9664 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-140200 san=[127.0.0.1 172.17.81.10 localhost minikube multinode-140200]
	I0407 14:23:50.115484    9664 provision.go:177] copyRemoteCerts
	I0407 14:23:50.128003    9664 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 14:23:50.128174    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:23:52.315874    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:23:52.316091    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:52.316091    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:23:54.995282    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:23:54.995282    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:54.996035    9664 sshutil.go:53] new ssh client: &{IP:172.17.81.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200\id_rsa Username:docker}
	I0407 14:23:55.100399    9664 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9721873s)
	I0407 14:23:55.100534    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0407 14:23:55.100940    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 14:23:55.151480    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0407 14:23:55.152091    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0407 14:23:55.203267    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0407 14:23:55.203383    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0407 14:23:55.254672    9664 provision.go:87] duration metric: took 15.1609973s to configureAuth
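configureAuth above rebuilds the Docker TLS material: it refreshes the local ca/cert/key copies, issues a server certificate whose SANs cover 127.0.0.1, the new guest IP 172.17.81.10 and the machine names, and copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. Below is a minimal crypto/x509 sketch of issuing such a SAN'd certificate; the self-signed stand-in CA, 2048-bit keys and validity period are assumptions for illustration (minikube signs with the CA kept under .minikube\certs instead).

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Stand-in CA; the real flow reuses ca.pem / ca-key.pem from the certs directory.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().AddDate(3, 0, 0),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	// Server certificate with the SAN list shown in the log above.
	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-140200"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.81.10")},
		DNSNames:     []string{"localhost", "minikube", "multinode-140200"},
	}
	srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: srvDER})
}
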
	I0407 14:23:55.254780    9664 buildroot.go:189] setting minikube options for container-runtime
	I0407 14:23:55.255570    9664 config.go:182] Loaded profile config "multinode-140200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 14:23:55.255753    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:23:57.521423    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:23:57.521423    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:57.521423    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:24:00.230828    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:24:00.232072    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:00.238451    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:24:00.238762    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.81.10 22 <nil> <nil>}
	I0407 14:24:00.239294    9664 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0407 14:24:00.381497    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0407 14:24:00.381497    9664 buildroot.go:70] root file system type: tmpfs
	I0407 14:24:00.382032    9664 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0407 14:24:00.382075    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:24:02.567246    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:24:02.567246    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:02.567613    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:24:05.149330    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:24:05.149330    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:05.154945    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:24:05.155505    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.81.10 22 <nil> <nil>}
	I0407 14:24:05.155505    9664 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0407 14:24:05.324789    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0407 14:24:05.324789    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:24:07.539138    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:24:07.539404    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:07.539404    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:24:10.149946    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:24:10.149946    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:10.155051    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:24:10.155841    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.81.10 22 <nil> <nil>}
	I0407 14:24:10.155841    9664 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0407 14:24:12.663888    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0407 14:24:12.663888    9664 machine.go:96] duration metric: took 48.9548517s to provisionDockerMachine
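The docker.service unit printed above is installed with an idempotent pattern: diff the generated file against the installed one and, only when they differ, move it into place, daemon-reload, enable and restart docker. The "diff: can't stat" output simply means no unit was installed yet, so the new one is adopted and the "Created symlink" line records the enable. A local Go sketch of the same compare-then-replace idea follows; the paths come from the log and the program would need to run as root on a systemd host.

package main

import (
	"bytes"
	"fmt"
	"os"
	"os/exec"
)

func main() {
	const installed = "/lib/systemd/system/docker.service"
	const generated = "/lib/systemd/system/docker.service.new"

	oldUnit, _ := os.ReadFile(installed) // a missing file reads as empty, the "can't stat" case above
	newUnit, err := os.ReadFile(generated)
	if err != nil {
		fmt.Println("no generated unit to install:", err)
		return
	}
	if bytes.Equal(oldUnit, newUnit) {
		fmt.Println("unit unchanged, skipping restart")
		return
	}
	if err := os.Rename(generated, installed); err != nil {
		fmt.Println("install failed:", err)
		return
	}
	// Same follow-up the SSH command runs: reload units, enable and restart docker.
	for _, args := range [][]string{{"daemon-reload"}, {"enable", "docker"}, {"restart", "docker"}} {
		exec.Command("systemctl", append([]string{"-f"}, args...)...).Run()
	}
}
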
	I0407 14:24:12.663888    9664 start.go:293] postStartSetup for "multinode-140200" (driver="hyperv")
	I0407 14:24:12.663888    9664 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 14:24:12.675945    9664 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 14:24:12.675945    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:24:14.946945    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:24:14.948000    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:14.948086    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:24:17.750016    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:24:17.750016    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:17.751180    9664 sshutil.go:53] new ssh client: &{IP:172.17.81.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200\id_rsa Username:docker}
	I0407 14:24:17.858791    9664 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1828059s)
	I0407 14:24:17.871936    9664 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 14:24:17.880520    9664 command_runner.go:130] > NAME=Buildroot
	I0407 14:24:17.880520    9664 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0407 14:24:17.880581    9664 command_runner.go:130] > ID=buildroot
	I0407 14:24:17.880581    9664 command_runner.go:130] > VERSION_ID=2023.02.9
	I0407 14:24:17.880581    9664 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0407 14:24:17.880706    9664 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 14:24:17.880729    9664 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0407 14:24:17.881243    9664 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0407 14:24:17.882291    9664 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> 77282.pem in /etc/ssl/certs
	I0407 14:24:17.882291    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> /etc/ssl/certs/77282.pem
	I0407 14:24:17.895371    9664 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 14:24:17.919401    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem --> /etc/ssl/certs/77282.pem (1708 bytes)
	I0407 14:24:17.970939    9664 start.go:296] duration metric: took 5.3070107s for postStartSetup
	I0407 14:24:17.970939    9664 fix.go:56] duration metric: took 1m36.9499156s for fixHost
	I0407 14:24:17.970939    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:24:20.302777    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:24:20.302777    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:20.303438    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:24:22.951613    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:24:22.951613    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:22.958366    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:24:22.958951    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.81.10 22 <nil> <nil>}
	I0407 14:24:22.958951    9664 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 14:24:23.091704    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744035863.111759143
	
	I0407 14:24:23.091704    9664 fix.go:216] guest clock: 1744035863.111759143
	I0407 14:24:23.091704    9664 fix.go:229] Guest: 2025-04-07 14:24:23.111759143 +0000 UTC Remote: 2025-04-07 14:24:17.9709393 +0000 UTC m=+103.870720701 (delta=5.140819843s)
	I0407 14:24:23.091704    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:24:25.402754    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:24:25.402754    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:25.403633    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:24:28.187294    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:24:28.187391    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:28.192560    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:24:28.193328    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.81.10 22 <nil> <nil>}
	I0407 14:24:28.193328    9664 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744035863
	I0407 14:24:28.344101    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  7 14:24:23 UTC 2025
	
	I0407 14:24:28.344188    9664 fix.go:236] clock set: Mon Apr  7 14:24:23 UTC 2025
	 (err=<nil>)
	I0407 14:24:28.344188    9664 start.go:83] releasing machines lock for "multinode-140200", held for 1m47.3237311s
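Because the VM had been powered off, its clock lagged the host by roughly five seconds, so the fix step above reads date +%s.%N in the guest, compares it with the host clock, and resets the guest with sudo date -s @<epoch>. The Go sketch below reproduces that comparison using the guest timestamp captured in the log; the one-second tolerance is an assumption for illustration.

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestOut := "1744035863.111759143" // `date +%s.%N` output from the guest, as logged above
	parts := strings.SplitN(strings.TrimSpace(guestOut), ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)

	host := time.Now()
	drift := guest.Sub(host)
	if drift < 0 {
		drift = -drift
	}
	if drift > time.Second { // tolerance chosen for illustration
		fmt.Printf("drift %v, would run: sudo date -s @%d\n", drift, host.Unix())
	} else {
		fmt.Println("guest clock is close enough, leaving it alone")
	}
}
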
	I0407 14:24:28.344458    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:24:30.559722    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:24:30.559722    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:30.560108    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:24:33.190065    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:24:33.190065    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:33.196802    9664 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0407 14:24:33.196802    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:24:33.204915    9664 ssh_runner.go:195] Run: cat /version.json
	I0407 14:24:33.204915    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:24:35.476677    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:24:35.476677    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:35.476677    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:24:35.484852    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:24:35.484852    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:35.484852    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:24:38.291981    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:24:38.291981    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:38.291981    9664 sshutil.go:53] new ssh client: &{IP:172.17.81.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200\id_rsa Username:docker}
	I0407 14:24:38.316113    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:24:38.316946    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:38.317008    9664 sshutil.go:53] new ssh client: &{IP:172.17.81.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200\id_rsa Username:docker}
	I0407 14:24:38.394211    9664 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0407 14:24:38.394708    9664 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.197866s)
	W0407 14:24:38.394822    9664 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0407 14:24:38.412784    9664 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0407 14:24:38.412839    9664 ssh_runner.go:235] Completed: cat /version.json: (5.2078836s)
	I0407 14:24:38.424508    9664 ssh_runner.go:195] Run: systemctl --version
	I0407 14:24:38.433457    9664 command_runner.go:130] > systemd 252 (252)
	I0407 14:24:38.433499    9664 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0407 14:24:38.444729    9664 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0407 14:24:38.453499    9664 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0407 14:24:38.453499    9664 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 14:24:38.464968    9664 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 14:24:38.497536    9664 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0407 14:24:38.497654    9664 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
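With kindnet recommended for this multinode profile, any pre-installed bridge or podman CNI configurations are renamed with a .mk_disabled suffix so the kubelet ignores them (here /etc/cni/net.d/87-podman-bridge.conflist). A Go sketch of that rename pass follows; it mirrors the find/mv command above rather than minikube's own code and needs root to touch /etc/cni/net.d.

package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

func main() {
	const dir = "/etc/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("cannot read", dir, ":", err)
		return
	}
	for _, e := range entries {
		name := e.Name()
		if e.IsDir() || strings.HasSuffix(name, ".mk_disabled") {
			continue // already disabled or not a config file
		}
		if strings.Contains(name, "bridge") || strings.Contains(name, "podman") {
			src := filepath.Join(dir, name)
			if err := os.Rename(src, src+".mk_disabled"); err == nil {
				fmt.Println("disabled", src)
			}
		}
	}
}
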
	I0407 14:24:38.497654    9664 start.go:495] detecting cgroup driver to use...
	I0407 14:24:38.497654    9664 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0407 14:24:38.499828    9664 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0407 14:24:38.499828    9664 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0407 14:24:38.534270    9664 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0407 14:24:38.546104    9664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0407 14:24:38.579011    9664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0407 14:24:38.600306    9664 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0407 14:24:38.613869    9664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0407 14:24:38.642635    9664 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 14:24:38.672814    9664 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0407 14:24:38.704357    9664 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 14:24:38.734850    9664 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 14:24:38.765012    9664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0407 14:24:38.794529    9664 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0407 14:24:38.826005    9664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0407 14:24:38.853773    9664 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 14:24:38.870623    9664 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 14:24:38.870979    9664 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 14:24:38.883895    9664 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 14:24:38.916681    9664 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 14:24:38.942042    9664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:24:39.149380    9664 ssh_runner.go:195] Run: sudo systemctl restart containerd
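Before restarting containerd the runner switches it to the cgroupfs driver and the runc v2 runtime via the sed edits above, then makes sure bridged traffic is visible to iptables: if the net.bridge.bridge-nf-call-iptables sysctl is missing it loads br_netfilter and enables IPv4 forwarding. The Go sketch below runs the same three commands; it is a standalone illustration and requires root.

package main

import (
	"fmt"
	"os/exec"
)

// run executes a command and reports a failure the way the ssh_runner lines above do.
func run(name string, args ...string) error {
	out, err := exec.Command(name, args...).CombinedOutput()
	if err != nil {
		fmt.Printf("%s %v failed: %v\n%s", name, args, err, out)
	}
	return err
}

func main() {
	// The sysctl only exists once br_netfilter is loaded, so a failure here
	// triggers the modprobe, matching the sequence in the log.
	if run("sudo", "sysctl", "net.bridge.bridge-nf-call-iptables") != nil {
		run("sudo", "modprobe", "br_netfilter")
	}
	run("sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward")
}
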
	I0407 14:24:39.183027    9664 start.go:495] detecting cgroup driver to use...
	I0407 14:24:39.194328    9664 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0407 14:24:39.222670    9664 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0407 14:24:39.222670    9664 command_runner.go:130] > [Unit]
	I0407 14:24:39.222764    9664 command_runner.go:130] > Description=Docker Application Container Engine
	I0407 14:24:39.222764    9664 command_runner.go:130] > Documentation=https://docs.docker.com
	I0407 14:24:39.222764    9664 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0407 14:24:39.222840    9664 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0407 14:24:39.222840    9664 command_runner.go:130] > StartLimitBurst=3
	I0407 14:24:39.222840    9664 command_runner.go:130] > StartLimitIntervalSec=60
	I0407 14:24:39.222840    9664 command_runner.go:130] > [Service]
	I0407 14:24:39.222902    9664 command_runner.go:130] > Type=notify
	I0407 14:24:39.222902    9664 command_runner.go:130] > Restart=on-failure
	I0407 14:24:39.222902    9664 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0407 14:24:39.223009    9664 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0407 14:24:39.223009    9664 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0407 14:24:39.223009    9664 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0407 14:24:39.223009    9664 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0407 14:24:39.223009    9664 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0407 14:24:39.223009    9664 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0407 14:24:39.223233    9664 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0407 14:24:39.223233    9664 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0407 14:24:39.223301    9664 command_runner.go:130] > ExecStart=
	I0407 14:24:39.223341    9664 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0407 14:24:39.223341    9664 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0407 14:24:39.223341    9664 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0407 14:24:39.223441    9664 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0407 14:24:39.223441    9664 command_runner.go:130] > LimitNOFILE=infinity
	I0407 14:24:39.223441    9664 command_runner.go:130] > LimitNPROC=infinity
	I0407 14:24:39.223441    9664 command_runner.go:130] > LimitCORE=infinity
	I0407 14:24:39.223441    9664 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0407 14:24:39.223441    9664 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0407 14:24:39.223441    9664 command_runner.go:130] > TasksMax=infinity
	I0407 14:24:39.223441    9664 command_runner.go:130] > TimeoutStartSec=0
	I0407 14:24:39.223562    9664 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0407 14:24:39.223562    9664 command_runner.go:130] > Delegate=yes
	I0407 14:24:39.223692    9664 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0407 14:24:39.223692    9664 command_runner.go:130] > KillMode=process
	I0407 14:24:39.223692    9664 command_runner.go:130] > [Install]
	I0407 14:24:39.223692    9664 command_runner.go:130] > WantedBy=multi-user.target
	I0407 14:24:39.239764    9664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 14:24:39.272337    9664 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 14:24:39.318378    9664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 14:24:39.354724    9664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 14:24:39.388275    9664 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0407 14:24:39.453914    9664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 14:24:39.479650    9664 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 14:24:39.512303    9664 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0407 14:24:39.525980    9664 ssh_runner.go:195] Run: which cri-dockerd
	I0407 14:24:39.532680    9664 command_runner.go:130] > /usr/bin/cri-dockerd
	I0407 14:24:39.544531    9664 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0407 14:24:39.579586    9664 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0407 14:24:39.620673    9664 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0407 14:24:39.818163    9664 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0407 14:24:40.008711    9664 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0407 14:24:40.009015    9664 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
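
docker.go:574 pins Docker's cgroup driver to "cgroupfs" by copying a small /etc/docker/daemon.json into the VM. The 130-byte payload itself is not shown in the log, so the sketch below only illustrates a plausible shape for such a file; treating exec-opts as its sole content is an assumption:

package main

import (
	"encoding/json"
	"fmt"
)

// daemonConfig is a minimal stand-in for /etc/docker/daemon.json; the real file
// that minikube generates may contain more keys than shown here.
type daemonConfig struct {
	ExecOpts []string `json:"exec-opts"`
}

func main() {
	cfg := daemonConfig{
		// Pin the cgroup driver so dockerd and the kubelet agree on "cgroupfs".
		ExecOpts: []string{"native.cgroupdriver=cgroupfs"},
	}
	out, _ := json.MarshalIndent(cfg, "", "  ")
	fmt.Println(string(out))
	// In the log the resulting bytes are copied over SSH ("scp memory") to
	// /etc/docker/daemon.json, followed by daemon-reload and a docker restart.
}
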
	I0407 14:24:40.058666    9664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:24:40.263139    9664 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0407 14:24:42.979685    9664 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.7165255s)
	I0407 14:24:42.991974    9664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0407 14:24:43.027377    9664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 14:24:43.062407    9664 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0407 14:24:43.255872    9664 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0407 14:24:43.453774    9664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:24:43.648518    9664 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0407 14:24:43.686304    9664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 14:24:43.719797    9664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:24:43.911950    9664 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0407 14:24:44.013028    9664 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0407 14:24:44.024339    9664 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0407 14:24:44.032339    9664 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0407 14:24:44.032339    9664 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0407 14:24:44.033170    9664 command_runner.go:130] > Device: 0,22	Inode: 852         Links: 1
	I0407 14:24:44.033170    9664 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0407 14:24:44.033170    9664 command_runner.go:130] > Access: 2025-04-07 14:24:43.956784007 +0000
	I0407 14:24:44.033170    9664 command_runner.go:130] > Modify: 2025-04-07 14:24:43.956784007 +0000
	I0407 14:24:44.033170    9664 command_runner.go:130] > Change: 2025-04-07 14:24:43.960784030 +0000
	I0407 14:24:44.033170    9664 command_runner.go:130] >  Birth: -
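
start.go:542 gives cri-dockerd up to 60s for its socket to appear before stat-ing it. A small sketch of that wait loop; only the path and the 60-second budget come from the log, and the polling interval is arbitrary:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls until path exists (stat succeeds) or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for {
		if _, err := os.Stat(path); err == nil {
			return nil // the CRI socket is there
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %s waiting for %s", timeout, path)
		}
		time.Sleep(500 * time.Millisecond)
	}
}

func main() {
	if err := waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("cri-dockerd socket is ready")
}
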
	I0407 14:24:44.033510    9664 start.go:563] Will wait 60s for crictl version
	I0407 14:24:44.045742    9664 ssh_runner.go:195] Run: which crictl
	I0407 14:24:44.051857    9664 command_runner.go:130] > /usr/bin/crictl
	I0407 14:24:44.063467    9664 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 14:24:44.118378    9664 command_runner.go:130] > Version:  0.1.0
	I0407 14:24:44.118480    9664 command_runner.go:130] > RuntimeName:  docker
	I0407 14:24:44.118480    9664 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0407 14:24:44.118480    9664 command_runner.go:130] > RuntimeApiVersion:  v1
	I0407 14:24:44.118661    9664 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0407 14:24:44.127363    9664 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 14:24:44.165417    9664 command_runner.go:130] > 27.4.0
	I0407 14:24:44.176494    9664 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 14:24:44.213583    9664 command_runner.go:130] > 27.4.0
	I0407 14:24:44.218574    9664 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0407 14:24:44.218574    9664 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0407 14:24:44.222577    9664 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0407 14:24:44.222577    9664 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0407 14:24:44.222577    9664 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0407 14:24:44.223576    9664 ip.go:211] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:71:e6:e3 Flags:up|broadcast|multicast|running}
	I0407 14:24:44.225580    9664 ip.go:214] interface addr: fe80::8b22:fcbb:f73:c9e5/64
	I0407 14:24:44.225580    9664 ip.go:214] interface addr: 172.17.80.1/20
	I0407 14:24:44.236594    9664 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0407 14:24:44.242639    9664 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
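
This bash one-liner rewrites /etc/hosts idempotently: any stale host.minikube.internal entry is filtered out and the current gateway IP (172.17.80.1) is appended; the same pattern is reused later for control-plane.minikube.internal. An equivalent, purely illustrative Go version:

package main

import (
	"fmt"
	"os"
	"strings"
)

// upsertHost drops any existing line ending in "\t<name>" and appends "ip\tname",
// mirroring the grep -v / echo pipeline from the log.
func upsertHost(path, ip, name string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+name) {
			continue // stale entry for this host name
		}
		kept = append(kept, line)
	}
	kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := upsertHost("/etc/hosts", "172.17.80.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
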
	I0407 14:24:44.263857    9664 kubeadm.go:883] updating cluster {Name:multinode-140200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-1
40200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.81.10 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.82.40 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.83.62 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-ga
dget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 14:24:44.263857    9664 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 14:24:44.273242    9664 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0407 14:24:44.300803    9664 command_runner.go:130] > kindest/kindnetd:v20250214-acbabc1a
	I0407 14:24:44.300803    9664 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.32.2
	I0407 14:24:44.300803    9664 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.32.2
	I0407 14:24:44.300803    9664 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.32.2
	I0407 14:24:44.300803    9664 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.32.2
	I0407 14:24:44.300803    9664 command_runner.go:130] > registry.k8s.io/etcd:3.5.16-0
	I0407 14:24:44.300803    9664 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0407 14:24:44.300803    9664 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0407 14:24:44.300803    9664 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 14:24:44.300803    9664 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0407 14:24:44.300803    9664 docker.go:689] Got preloaded images: -- stdout --
	kindest/kindnetd:v20250214-acbabc1a
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0407 14:24:44.300803    9664 docker.go:619] Images already preloaded, skipping extraction
	I0407 14:24:44.310785    9664 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0407 14:24:44.338977    9664 command_runner.go:130] > kindest/kindnetd:v20250214-acbabc1a
	I0407 14:24:44.339095    9664 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.32.2
	I0407 14:24:44.339095    9664 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.32.2
	I0407 14:24:44.339095    9664 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.32.2
	I0407 14:24:44.339095    9664 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.32.2
	I0407 14:24:44.339095    9664 command_runner.go:130] > registry.k8s.io/etcd:3.5.16-0
	I0407 14:24:44.339095    9664 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0407 14:24:44.339095    9664 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0407 14:24:44.339095    9664 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 14:24:44.339095    9664 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0407 14:24:44.339095    9664 docker.go:689] Got preloaded images: -- stdout --
	kindest/kindnetd:v20250214-acbabc1a
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0407 14:24:44.339095    9664 cache_images.go:84] Images are preloaded, skipping loading
	I0407 14:24:44.339095    9664 kubeadm.go:934] updating node { 172.17.81.10 8443 v1.32.2 docker true true} ...
	I0407 14:24:44.339663    9664 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-140200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.81.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:multinode-140200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 14:24:44.348043    9664 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0407 14:24:44.417290    9664 command_runner.go:130] > cgroupfs
	I0407 14:24:44.417353    9664 cni.go:84] Creating CNI manager for ""
	I0407 14:24:44.417353    9664 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0407 14:24:44.417353    9664 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 14:24:44.417353    9664 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.81.10 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-140200 NodeName:multinode-140200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.81.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.81.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 14:24:44.417353    9664 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.81.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-140200"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.17.81.10"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.81.10"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 14:24:44.428685    9664 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 14:24:44.447042    9664 command_runner.go:130] > kubeadm
	I0407 14:24:44.447042    9664 command_runner.go:130] > kubectl
	I0407 14:24:44.447042    9664 command_runner.go:130] > kubelet
	I0407 14:24:44.447042    9664 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 14:24:44.457777    9664 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 14:24:44.475470    9664 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0407 14:24:44.509591    9664 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 14:24:44.541732    9664 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
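
The 2294-byte kubeadm.yaml.new copied here is the rendered config printed above. minikube assembles that YAML in Go before shipping it to the VM; the toy text/template below shows the fill-in-the-values idea for the InitConfiguration stanza only and is not minikube's actual template:

package main

import (
	"os"
	"text/template"
)

// Only the handful of values that differ per node are parameterised here.
type initCfg struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
}

const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "{{.NodeName}}"
`

func main() {
	t := template.Must(template.New("init").Parse(initTmpl))
	// Values taken from the log above: control-plane IP 172.17.81.10, port 8443.
	cfg := initCfg{AdvertiseAddress: "172.17.81.10", BindPort: 8443, NodeName: "multinode-140200"}
	if err := t.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
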
	I0407 14:24:44.587264    9664 ssh_runner.go:195] Run: grep 172.17.81.10	control-plane.minikube.internal$ /etc/hosts
	I0407 14:24:44.593595    9664 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.81.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 14:24:44.627624    9664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:24:44.819743    9664 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 14:24:44.849015    9664 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200 for IP: 172.17.81.10
	I0407 14:24:44.849015    9664 certs.go:194] generating shared ca certs ...
	I0407 14:24:44.849015    9664 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:24:44.850041    9664 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0407 14:24:44.850514    9664 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0407 14:24:44.850514    9664 certs.go:256] generating profile certs ...
	I0407 14:24:44.851273    9664 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\client.key
	I0407 14:24:44.851273    9664 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.key.90c83a59
	I0407 14:24:44.851273    9664 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.crt.90c83a59 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.81.10]
	I0407 14:24:45.630073    9664 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.crt.90c83a59 ...
	I0407 14:24:45.630073    9664 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.crt.90c83a59: {Name:mkf42bc21a237f89ddbd6add9d917623f245de4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:24:45.631035    9664 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.key.90c83a59 ...
	I0407 14:24:45.631035    9664 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.key.90c83a59: {Name:mk2bb99af4db552e24dfaf61165a338e38686628 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:24:45.633036    9664 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.crt.90c83a59 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.crt
	I0407 14:24:45.649054    9664 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.key.90c83a59 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.key
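
crypto.go:68 generates the apiserver certificate with the IP SANs listed above (the service VIP 10.96.0.1, 127.0.0.1, 10.0.0.1 and the node IP 172.17.81.10). The sketch below only demonstrates how such IP SANs are attached with crypto/x509; it self-signs for brevity, whereas minikube signs the profile cert with its minikubeCA:

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration above
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// The four IP SANs from the log.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"),
			net.ParseIP("127.0.0.1"),
			net.ParseIP("10.0.0.1"),
			net.ParseIP("172.17.81.10"),
		},
	}
	// Self-signed for brevity; minikube would sign with its CA cert and key instead.
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
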
	I0407 14:24:45.650053    9664 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\proxy-client.key
	I0407 14:24:45.650053    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0407 14:24:45.651014    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0407 14:24:45.651014    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0407 14:24:45.651014    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0407 14:24:45.651014    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0407 14:24:45.651014    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0407 14:24:45.651014    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0407 14:24:45.651014    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0407 14:24:45.652017    9664 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem (1338 bytes)
	W0407 14:24:45.652017    9664 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728_empty.pem, impossibly tiny 0 bytes
	I0407 14:24:45.652017    9664 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0407 14:24:45.653036    9664 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0407 14:24:45.653036    9664 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0407 14:24:45.653036    9664 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0407 14:24:45.654055    9664 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem (1708 bytes)
	I0407 14:24:45.654055    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem -> /usr/share/ca-certificates/7728.pem
	I0407 14:24:45.654055    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> /usr/share/ca-certificates/77282.pem
	I0407 14:24:45.654055    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:24:45.655051    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 14:24:45.703834    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 14:24:45.750128    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 14:24:45.795646    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0407 14:24:45.839915    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0407 14:24:45.894262    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 14:24:45.937910    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 14:24:45.983793    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 14:24:46.037317    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem --> /usr/share/ca-certificates/7728.pem (1338 bytes)
	I0407 14:24:46.085937    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem --> /usr/share/ca-certificates/77282.pem (1708 bytes)
	I0407 14:24:46.136560    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 14:24:46.183619    9664 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 14:24:46.229947    9664 ssh_runner.go:195] Run: openssl version
	I0407 14:24:46.237848    9664 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0407 14:24:46.249845    9664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77282.pem && ln -fs /usr/share/ca-certificates/77282.pem /etc/ssl/certs/77282.pem"
	I0407 14:24:46.281889    9664 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77282.pem
	I0407 14:24:46.290380    9664 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr  7 12:38 /usr/share/ca-certificates/77282.pem
	I0407 14:24:46.290471    9664 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:38 /usr/share/ca-certificates/77282.pem
	I0407 14:24:46.302987    9664 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77282.pem
	I0407 14:24:46.312890    9664 command_runner.go:130] > 3ec20f2e
	I0407 14:24:46.326256    9664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77282.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 14:24:46.359033    9664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 14:24:46.389880    9664 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:24:46.398519    9664 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr  7 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:24:46.398605    9664 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:24:46.410491    9664 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:24:46.420146    9664 command_runner.go:130] > b5213941
	I0407 14:24:46.431162    9664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 14:24:46.461651    9664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7728.pem && ln -fs /usr/share/ca-certificates/7728.pem /etc/ssl/certs/7728.pem"
	I0407 14:24:46.492323    9664 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7728.pem
	I0407 14:24:46.498263    9664 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr  7 12:38 /usr/share/ca-certificates/7728.pem
	I0407 14:24:46.498620    9664 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:38 /usr/share/ca-certificates/7728.pem
	I0407 14:24:46.510533    9664 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7728.pem
	I0407 14:24:46.519544    9664 command_runner.go:130] > 51391683
	I0407 14:24:46.531465    9664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7728.pem /etc/ssl/certs/51391683.0"
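
certs.go:528 installs each certificate under /etc/ssl/certs by asking openssl for its subject hash and symlinking <hash>.0 back to the PEM file (e.g. b5213941.0 for minikubeCA.pem). A compact Go version of that shell sequence; it still shells out to openssl, as the log does, and the helper name is mine:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkByHash asks openssl for the certificate's subject hash and creates the
// /etc/ssl/certs/<hash>.0 symlink that OpenSSL-based clients look up.
func linkByHash(pemPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return fmt.Errorf("openssl x509 -hash: %w", err)
	}
	hash := strings.TrimSpace(string(out)) // e.g. "b5213941" for minikubeCA.pem above
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	if _, err := os.Lstat(link); err == nil {
		return nil // already linked
	}
	return os.Symlink(pemPath, link)
}

func main() {
	if err := linkByHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
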
	I0407 14:24:46.568502    9664 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 14:24:46.576497    9664 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 14:24:46.576613    9664 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0407 14:24:46.576613    9664 command_runner.go:130] > Device: 8,1	Inode: 7336801     Links: 1
	I0407 14:24:46.576613    9664 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0407 14:24:46.576672    9664 command_runner.go:130] > Access: 2025-04-07 13:59:48.055920369 +0000
	I0407 14:24:46.576672    9664 command_runner.go:130] > Modify: 2025-04-07 13:59:48.055920369 +0000
	I0407 14:24:46.576672    9664 command_runner.go:130] > Change: 2025-04-07 13:59:48.055920369 +0000
	I0407 14:24:46.576672    9664 command_runner.go:130] >  Birth: 2025-04-07 13:59:48.055920369 +0000
	I0407 14:24:46.588420    9664 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0407 14:24:46.598132    9664 command_runner.go:130] > Certificate will not expire
	I0407 14:24:46.609613    9664 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0407 14:24:46.621307    9664 command_runner.go:130] > Certificate will not expire
	I0407 14:24:46.633205    9664 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0407 14:24:46.644280    9664 command_runner.go:130] > Certificate will not expire
	I0407 14:24:46.655810    9664 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0407 14:24:46.665305    9664 command_runner.go:130] > Certificate will not expire
	I0407 14:24:46.678515    9664 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0407 14:24:46.687622    9664 command_runner.go:130] > Certificate will not expire
	I0407 14:24:46.702911    9664 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0407 14:24:46.711879    9664 command_runner.go:130] > Certificate will not expire
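
The repeated openssl x509 -checkend 86400 calls confirm that each control-plane certificate stays valid for at least another 24 hours before the restart proceeds. The same question can be answered in-process with crypto/x509; this is an equivalent sketch, not what minikube actually runs:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// i.e. the same question answered by `openssl x509 -checkend <seconds>`.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if soon {
		fmt.Println("certificate expires within 24h")
	} else {
		fmt.Println("Certificate will not expire") // matches the log's wording
	}
}
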
	I0407 14:24:46.711879    9664 kubeadm.go:392] StartCluster: {Name:multinode-140200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-1402
00 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.81.10 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.82.40 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.83.62 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadge
t:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:24:46.724383    9664 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0407 14:24:46.760994    9664 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 14:24:46.782638    9664 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0407 14:24:46.782638    9664 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0407 14:24:46.782638    9664 command_runner.go:130] > /var/lib/minikube/etcd:
	I0407 14:24:46.782638    9664 command_runner.go:130] > member
	I0407 14:24:46.782638    9664 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0407 14:24:46.782638    9664 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0407 14:24:46.793657    9664 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0407 14:24:46.810195    9664 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0407 14:24:46.811497    9664 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-140200" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 14:24:46.812203    9664 kubeconfig.go:62] C:\Users\jenkins.minikube3\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-140200" cluster setting kubeconfig missing "multinode-140200" context setting]
	I0407 14:24:46.813123    9664 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:24:46.832112    9664 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 14:24:46.833121    9664 kapi.go:59] client config for multinode-140200: &rest.Config{Host:"https://172.17.81.10:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-140200/client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-140200/client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:
[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2be92e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0407 14:24:46.835119    9664 cert_rotation.go:140] Starting client certificate rotation controller
	I0407 14:24:46.835119    9664 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0407 14:24:46.835119    9664 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0407 14:24:46.835119    9664 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0407 14:24:46.835119    9664 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0407 14:24:46.845125    9664 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0407 14:24:46.862731    9664 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0407 14:24:46.862731    9664 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0407 14:24:46.862731    9664 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0407 14:24:46.862731    9664 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta4
	I0407 14:24:46.862731    9664 command_runner.go:130] >  kind: InitConfiguration
	I0407 14:24:46.862731    9664 command_runner.go:130] >  localAPIEndpoint:
	I0407 14:24:46.862731    9664 command_runner.go:130] > -  advertiseAddress: 172.17.92.89
	I0407 14:24:46.862731    9664 command_runner.go:130] > +  advertiseAddress: 172.17.81.10
	I0407 14:24:46.862731    9664 command_runner.go:130] >    bindPort: 8443
	I0407 14:24:46.862731    9664 command_runner.go:130] >  bootstrapTokens:
	I0407 14:24:46.862731    9664 command_runner.go:130] >    - groups:
	I0407 14:24:46.862731    9664 command_runner.go:130] > @@ -15,13 +15,13 @@
	I0407 14:24:46.862731    9664 command_runner.go:130] >    name: "multinode-140200"
	I0407 14:24:46.862731    9664 command_runner.go:130] >    kubeletExtraArgs:
	I0407 14:24:46.862731    9664 command_runner.go:130] >      - name: "node-ip"
	I0407 14:24:46.862731    9664 command_runner.go:130] > -      value: "172.17.92.89"
	I0407 14:24:46.862731    9664 command_runner.go:130] > +      value: "172.17.81.10"
	I0407 14:24:46.862731    9664 command_runner.go:130] >    taints: []
	I0407 14:24:46.862731    9664 command_runner.go:130] >  ---
	I0407 14:24:46.862731    9664 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta4
	I0407 14:24:46.862731    9664 command_runner.go:130] >  kind: ClusterConfiguration
	I0407 14:24:46.863112    9664 command_runner.go:130] >  apiServer:
	I0407 14:24:46.863112    9664 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.17.92.89"]
	I0407 14:24:46.863112    9664 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.17.81.10"]
	I0407 14:24:46.863112    9664 command_runner.go:130] >    extraArgs:
	I0407 14:24:46.863112    9664 command_runner.go:130] >      - name: "enable-admission-plugins"
	I0407 14:24:46.863112    9664 command_runner.go:130] >        value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0407 14:24:46.863112    9664 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.17.92.89
	+  advertiseAddress: 172.17.81.10
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -15,13 +15,13 @@
	   name: "multinode-140200"
	   kubeletExtraArgs:
	     - name: "node-ip"
	-      value: "172.17.92.89"
	+      value: "172.17.81.10"
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.17.92.89"]
	+  certSANs: ["127.0.0.1", "localhost", "172.17.81.10"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	       value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	
	-- /stdout --
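
kubeadm.go:640 decides whether to reconfigure by diffing the kubeadm.yaml already on disk against the freshly rendered kubeadm.yaml.new; here the advertise address, node-ip and certSANs moved from 172.17.92.89 to 172.17.81.10, so a reconfigure is triggered. A stripped-down sketch of that drift check (a byte comparison instead of diff -u, so it only reports whether the files differ):

package main

import (
	"bytes"
	"fmt"
	"os"
)

// configDrifted reports whether the staged kubeadm config differs from the one
// currently in use. A missing current file also counts as drift.
func configDrifted(current, staged string) (bool, error) {
	cur, err := os.ReadFile(current)
	if os.IsNotExist(err) {
		return true, nil
	} else if err != nil {
		return false, err
	}
	next, err := os.ReadFile(staged)
	if err != nil {
		return false, err
	}
	return !bytes.Equal(cur, next), nil
}

func main() {
	drifted, err := configDrifted("/var/tmp/minikube/kubeadm.yaml", "/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	if drifted {
		fmt.Println("detected kubeadm config drift, will reconfigure cluster")
	}
}
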
	I0407 14:24:46.863112    9664 kubeadm.go:1160] stopping kube-system containers ...
	I0407 14:24:46.872123    9664 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0407 14:24:46.902172    9664 command_runner.go:130] > b2d29d6fc774
	I0407 14:24:46.902481    9664 command_runner.go:130] > 1e0d3f9a0f21
	I0407 14:24:46.902481    9664 command_runner.go:130] > 47eb0b16ce1d
	I0407 14:24:46.902481    9664 command_runner.go:130] > f6c740bfe5bb
	I0407 14:24:46.902481    9664 command_runner.go:130] > 2a1208136f15
	I0407 14:24:46.902481    9664 command_runner.go:130] > ec26042b5271
	I0407 14:24:46.902481    9664 command_runner.go:130] > 0d317e51cbf8
	I0407 14:24:46.902481    9664 command_runner.go:130] > 728d07c29084
	I0407 14:24:46.902481    9664 command_runner.go:130] > 8c615c7e0506
	I0407 14:24:46.902578    9664 command_runner.go:130] > 159f6e03fef6
	I0407 14:24:46.902578    9664 command_runner.go:130] > 783fd069538d
	I0407 14:24:46.902578    9664 command_runner.go:130] > 92c49129b5b0
	I0407 14:24:46.902611    9664 command_runner.go:130] > 50c1342f8214
	I0407 14:24:46.902611    9664 command_runner.go:130] > d7cc03773793
	I0407 14:24:46.902611    9664 command_runner.go:130] > 8bd2f8fc3a28
	I0407 14:24:46.902611    9664 command_runner.go:130] > ad64d975eb39
	I0407 14:24:46.902664    9664 docker.go:483] Stopping containers: [b2d29d6fc774 1e0d3f9a0f21 47eb0b16ce1d f6c740bfe5bb 2a1208136f15 ec26042b5271 0d317e51cbf8 728d07c29084 8c615c7e0506 159f6e03fef6 783fd069538d 92c49129b5b0 50c1342f8214 d7cc03773793 8bd2f8fc3a28 ad64d975eb39]
	I0407 14:24:46.910589    9664 ssh_runner.go:195] Run: docker stop b2d29d6fc774 1e0d3f9a0f21 47eb0b16ce1d f6c740bfe5bb 2a1208136f15 ec26042b5271 0d317e51cbf8 728d07c29084 8c615c7e0506 159f6e03fef6 783fd069538d 92c49129b5b0 50c1342f8214 d7cc03773793 8bd2f8fc3a28 ad64d975eb39
	I0407 14:24:46.940247    9664 command_runner.go:130] > b2d29d6fc774
	I0407 14:24:46.940247    9664 command_runner.go:130] > 1e0d3f9a0f21
	I0407 14:24:46.940247    9664 command_runner.go:130] > 47eb0b16ce1d
	I0407 14:24:46.940247    9664 command_runner.go:130] > f6c740bfe5bb
	I0407 14:24:46.940247    9664 command_runner.go:130] > 2a1208136f15
	I0407 14:24:46.940247    9664 command_runner.go:130] > ec26042b5271
	I0407 14:24:46.940247    9664 command_runner.go:130] > 0d317e51cbf8
	I0407 14:24:46.940247    9664 command_runner.go:130] > 728d07c29084
	I0407 14:24:46.940247    9664 command_runner.go:130] > 8c615c7e0506
	I0407 14:24:46.940247    9664 command_runner.go:130] > 159f6e03fef6
	I0407 14:24:46.940247    9664 command_runner.go:130] > 783fd069538d
	I0407 14:24:46.940247    9664 command_runner.go:130] > 92c49129b5b0
	I0407 14:24:46.940247    9664 command_runner.go:130] > 50c1342f8214
	I0407 14:24:46.940247    9664 command_runner.go:130] > d7cc03773793
	I0407 14:24:46.940247    9664 command_runner.go:130] > 8bd2f8fc3a28
	I0407 14:24:46.940247    9664 command_runner.go:130] > ad64d975eb39
	I0407 14:24:46.951475    9664 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0407 14:24:46.988707    9664 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 14:24:47.005616    9664 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0407 14:24:47.005677    9664 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0407 14:24:47.005677    9664 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0407 14:24:47.005677    9664 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 14:24:47.005677    9664 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 14:24:47.005677    9664 kubeadm.go:157] found existing configuration files:
	
	I0407 14:24:47.015905    9664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 14:24:47.032555    9664 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 14:24:47.033685    9664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 14:24:47.046729    9664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 14:24:47.076609    9664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 14:24:47.093621    9664 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 14:24:47.094645    9664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 14:24:47.105621    9664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 14:24:47.135971    9664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 14:24:47.156962    9664 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 14:24:47.156962    9664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 14:24:47.172736    9664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 14:24:47.204155    9664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 14:24:47.222763    9664 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 14:24:47.223059    9664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 14:24:47.233762    9664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 14:24:47.265209    9664 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 14:24:47.285673    9664 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:24:47.587507    9664 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 14:24:47.587611    9664 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0407 14:24:47.587611    9664 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0407 14:24:47.587611    9664 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0407 14:24:47.587611    9664 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0407 14:24:47.587611    9664 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0407 14:24:47.587611    9664 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0407 14:24:47.587611    9664 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0407 14:24:47.587611    9664 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0407 14:24:47.587611    9664 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0407 14:24:47.587611    9664 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0407 14:24:47.587611    9664 command_runner.go:130] > [certs] Using the existing "sa" key
	I0407 14:24:47.587829    9664 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:24:48.874076    9664 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 14:24:48.874152    9664 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 14:24:48.874152    9664 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0407 14:24:48.874152    9664 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 14:24:48.874152    9664 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 14:24:48.874152    9664 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 14:24:48.874257    9664 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.2863132s)
	I0407 14:24:48.874257    9664 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:24:49.185460    9664 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 14:24:49.185460    9664 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 14:24:49.185460    9664 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0407 14:24:49.185579    9664 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:24:49.273614    9664 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 14:24:49.273700    9664 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 14:24:49.273789    9664 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 14:24:49.273789    9664 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 14:24:49.273865    9664 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:24:49.361824    9664 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 14:24:49.361824    9664 api_server.go:52] waiting for apiserver process to appear ...
	I0407 14:24:49.371833    9664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:24:49.873791    9664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:24:50.375336    9664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:24:50.877729    9664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:24:51.376820    9664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:24:51.405603    9664 command_runner.go:130] > 1986
	I0407 14:24:51.405603    9664 api_server.go:72] duration metric: took 2.0437631s to wait for apiserver process to appear ...
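The roughly 500ms cadence of the pgrep runs above is a simple poll for the kube-apiserver process before any HTTP health checking starts. A rough sketch of that loop (the interval is an assumption taken from the timestamps; not minikube's actual code):

-- example sketch --
package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// pgrep exits non-zero while no kube-apiserver process matches,
	// so keep polling until it prints a PID.
	for {
		out, err := exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Output()
		if err == nil && len(out) > 0 {
			fmt.Printf("apiserver process found: %s", out)
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
}
-- /example sketch --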
	I0407 14:24:51.405603    9664 api_server.go:88] waiting for apiserver healthz status ...
	I0407 14:24:51.405603    9664 api_server.go:253] Checking apiserver healthz at https://172.17.81.10:8443/healthz ...
	I0407 14:24:54.307929    9664 api_server.go:279] https://172.17.81.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0407 14:24:54.307929    9664 api_server.go:103] status: https://172.17.81.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0407 14:24:54.307929    9664 api_server.go:253] Checking apiserver healthz at https://172.17.81.10:8443/healthz ...
	I0407 14:24:54.395698    9664 api_server.go:279] https://172.17.81.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0407 14:24:54.395698    9664 api_server.go:103] status: https://172.17.81.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0407 14:24:54.406476    9664 api_server.go:253] Checking apiserver healthz at https://172.17.81.10:8443/healthz ...
	I0407 14:24:54.494240    9664 api_server.go:279] https://172.17.81.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0407 14:24:54.494240    9664 api_server.go:103] status: https://172.17.81.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0407 14:24:54.906650    9664 api_server.go:253] Checking apiserver healthz at https://172.17.81.10:8443/healthz ...
	I0407 14:24:54.915631    9664 api_server.go:279] https://172.17.81.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0407 14:24:54.915688    9664 api_server.go:103] status: https://172.17.81.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0407 14:24:55.406297    9664 api_server.go:253] Checking apiserver healthz at https://172.17.81.10:8443/healthz ...
	I0407 14:24:55.423321    9664 api_server.go:279] https://172.17.81.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0407 14:24:55.423321    9664 api_server.go:103] status: https://172.17.81.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0407 14:24:55.906973    9664 api_server.go:253] Checking apiserver healthz at https://172.17.81.10:8443/healthz ...
	I0407 14:24:55.917283    9664 api_server.go:279] https://172.17.81.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0407 14:24:55.917283    9664 api_server.go:103] status: https://172.17.81.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0407 14:24:56.407008    9664 api_server.go:253] Checking apiserver healthz at https://172.17.81.10:8443/healthz ...
	I0407 14:24:56.415527    9664 api_server.go:279] https://172.17.81.10:8443/healthz returned 200:
	ok
	I0407 14:24:56.415527    9664 discovery_client.go:658] "Request Body" body=""
	I0407 14:24:56.415527    9664 round_trippers.go:470] GET https://172.17.81.10:8443/version
	I0407 14:24:56.415527    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:56.415527    9664 round_trippers.go:480]     Accept: application/json, */*
	I0407 14:24:56.415527    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:56.431760    9664 round_trippers.go:581] Response Status: 200 OK in 16 milliseconds
	I0407 14:24:56.431836    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:56.431836    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:56 GMT
	I0407 14:24:56.431836    9664 round_trippers.go:587]     Audit-Id: e9888871-2152-473f-84fc-74747ca3c545
	I0407 14:24:56.431836    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:56.431836    9664 round_trippers.go:587]     Content-Type: application/json
	I0407 14:24:56.431836    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:56.431836    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:56.431836    9664 round_trippers.go:587]     Content-Length: 263
	I0407 14:24:56.431836    9664 discovery_client.go:658] "Response Body" body=<
		{
		  "major": "1",
		  "minor": "32",
		  "gitVersion": "v1.32.2",
		  "gitCommit": "67a30c0adcf52bd3f56ff0893ce19966be12991f",
		  "gitTreeState": "clean",
		  "buildDate": "2025-02-12T21:19:47Z",
		  "goVersion": "go1.23.6",
		  "compiler": "gc",
		  "platform": "linux/amd64"
		}
	 >
	I0407 14:24:56.431836    9664 api_server.go:141] control plane version: v1.32.2
	I0407 14:24:56.431836    9664 api_server.go:131] duration metric: took 5.0261939s to wait for apiserver health ...
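The health wait above moves through three states: 403 responses while the probe is still treated as anonymous (consistent with the `[-]poststarthook/rbac/bootstrap-roles failed` lines, the default RBAC bindings that allow unauthenticated /healthz access are not in place yet), 500 responses with `reason withheld` while the remaining post-start hooks finish, and finally 200 `ok`. A minimal polling sketch of such a probe; TLS verification is skipped here purely for brevity, whereas the real client trusts the cluster CA:

-- example sketch --
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// NOTE: InsecureSkipVerify is for illustration only; a real client should
	// load the cluster CA instead of disabling verification.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://172.17.81.10:8443/healthz" // endpoint taken from the log above
	for {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz: %s\n", body) // "ok"
				return
			}
			// 403 and 500 both mean "not ready yet" at this stage.
			fmt.Printf("healthz not ready yet (%d)\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
}
-- /example sketch --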
	I0407 14:24:56.431836    9664 cni.go:84] Creating CNI manager for ""
	I0407 14:24:56.431836    9664 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0407 14:24:56.435498    9664 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0407 14:24:56.449498    9664 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0407 14:24:56.458913    9664 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0407 14:24:56.459004    9664 command_runner.go:130] >   Size: 3103192   	Blocks: 6064       IO Block: 4096   regular file
	I0407 14:24:56.459004    9664 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0407 14:24:56.459004    9664 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0407 14:24:56.459004    9664 command_runner.go:130] > Access: 2025-04-07 14:23:14.562263100 +0000
	I0407 14:24:56.459116    9664 command_runner.go:130] > Modify: 2025-01-14 09:03:58.000000000 +0000
	I0407 14:24:56.459116    9664 command_runner.go:130] > Change: 2025-04-07 14:23:05.746000000 +0000
	I0407 14:24:56.459116    9664 command_runner.go:130] >  Birth: -
	I0407 14:24:56.459222    9664 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0407 14:24:56.459288    9664 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0407 14:24:56.520006    9664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0407 14:24:57.879996    9664 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0407 14:24:57.880066    9664 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0407 14:24:57.880066    9664 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0407 14:24:57.880066    9664 command_runner.go:130] > daemonset.apps/kindnet configured
	I0407 14:24:57.880066    9664 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.3600489s)
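Because three nodes were detected, kindnet is chosen as the CNI; its manifest is copied to /var/tmp/minikube/cni.yaml and applied with the pinned kubectl, and the `unchanged`/`configured` lines show the objects already existed from the earlier run. A sketch of that apply step, reusing the paths from the log (illustrative only):

-- example sketch --
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Apply the CNI manifest with the kubeconfig that lives inside the VM,
	// as the ssh_runner command in the log does.
	out, err := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.32.2/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml",
	).CombinedOutput()
	if err != nil {
		fmt.Printf("kubectl apply failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("%s", out)
}
-- /example sketch --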
	I0407 14:24:57.880132    9664 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 14:24:57.880344    9664 type.go:204] "Request Body" body=""
	I0407 14:24:57.880480    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods
	I0407 14:24:57.880480    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:57.880480    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:57.880480    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:57.888444    9664 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 14:24:57.888444    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:57.888444    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:57.888444    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:57.888444    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:57 GMT
	I0407 14:24:57.888444    9664 round_trippers.go:587]     Audit-Id: 22d378a1-fcfc-425a-8ac5-a9c2887ed740
	I0407 14:24:57.888444    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:57.888444    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:57.891432    9664 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 80 f2 03 0a  0a 0a 00 12 04 31 39 35  |ist..........195|
		00000020  32 1a 00 12 80 29 0a 99  19 0a 18 63 6f 72 65 64  |2....).....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 35 66  |ns-668d6bf9bc-5f|
		00000040  70 34 66 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |p4f..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 34 33 37 32 32 36 61  |ystem".*$437226a|
		00000070  65 2d 65 36 33 64 2d 34  32 34 35 2d 62 62 65 61  |e-e63d-4245-bbea|
		00000080  2d 61 64 35 63 34 31 66  66 39 61 39 33 32 04 31  |-ad5c41ff9a932.1|
		00000090  39 32 32 38 00 42 08 08  e6 b4 cf bf 06 10 00 5a  |9228.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 313865 chars]
	 >
	I0407 14:24:57.893438    9664 system_pods.go:59] 12 kube-system pods found
	I0407 14:24:57.893438    9664 system_pods.go:61] "coredns-668d6bf9bc-5fp4f" [437226ae-e63d-4245-bbea-ad5c41ff9a93] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0407 14:24:57.894472    9664 system_pods.go:61] "etcd-multinode-140200" [50e84c56-5d78-4a51-bd63-4a724ccd5fd8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0407 14:24:57.894472    9664 system_pods.go:61] "kindnet-pv67r" [5f3d17bc-3df2-48f9-9840-641673243750] Running
	I0407 14:24:57.894472    9664 system_pods.go:61] "kindnet-rnp2q" [e28e853b-b703-4a36-90d2-3af1a37e74e0] Running
	I0407 14:24:57.894472    9664 system_pods.go:61] "kindnet-zkw9q" [123858da-6f70-4b10-b38e-bd930d21dbe4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0407 14:24:57.894472    9664 system_pods.go:61] "kube-apiserver-multinode-140200" [144753dc-c621-45f7-a94a-8b3835eebb12] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0407 14:24:57.894472    9664 system_pods.go:61] "kube-controller-manager-multinode-140200" [a7c6e3bb-197c-434e-9f19-74d7e48b50de] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0407 14:24:57.894472    9664 system_pods.go:61] "kube-proxy-2r7lj" [4892d703-fc43-4f67-8493-eaeae8c5e765] Running
	I0407 14:24:57.894472    9664 system_pods.go:61] "kube-proxy-9rx2d" [2eaab25d-fe0b-4c48-ac6b-42095f5fbce6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0407 14:24:57.894472    9664 system_pods.go:61] "kube-proxy-kvg58" [ba8a332c-bb4a-4e9c-9a4e-2c578bdc99c1] Running
	I0407 14:24:57.894472    9664 system_pods.go:61] "kube-scheduler-multinode-140200" [88dfeee8-a3c1-485b-abfe-9eaf0057d6cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0407 14:24:57.894472    9664 system_pods.go:61] "storage-provisioner" [01df03d8-8816-480c-941b-180069d26997] Running
	I0407 14:24:57.894472    9664 system_pods.go:74] duration metric: took 14.3397ms to wait for pod list to return data ...
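The pod list above is requested as protobuf (hence the hex-dumped response bodies) and then summarized into the twelve kube-system entries. A rough client-go equivalent of that listing step, assuming a hypothetical kubeconfig path:

-- example sketch --
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Hypothetical kubeconfig path; the test run uses the file minikube writes.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("  %s (phase=%s)\n", p.Name, p.Status.Phase)
	}
}
-- /example sketch --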
	I0407 14:24:57.894472    9664 node_conditions.go:102] verifying NodePressure condition ...
	I0407 14:24:57.895425    9664 type.go:204] "Request Body" body=""
	I0407 14:24:57.895425    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes
	I0407 14:24:57.895425    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:57.895425    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:57.895425    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:57.912638    9664 round_trippers.go:581] Response Status: 200 OK in 17 milliseconds
	I0407 14:24:57.912638    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:57.912882    9664 round_trippers.go:587]     Audit-Id: daa46545-5d5a-4a1c-9beb-7435a82319f5
	I0407 14:24:57.912882    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:57.912882    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:57.912882    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:57.912882    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:57.912882    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:57 GMT
	I0407 14:24:57.913087    9664 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0e 0a 02  76 31 12 08 4e 6f 64 65  |k8s.....v1..Node|
		00000010  4c 69 73 74 12 b1 5f 0a  0a 0a 00 12 04 31 39 35  |List.._......195|
		00000020  33 1a 00 12 99 26 0a 86  12 0a 10 6d 75 6c 74 69  |3....&.....multi|
		00000030  6e 6f 64 65 2d 31 34 30  32 30 30 12 00 1a 00 22  |node-140200...."|
		00000040  00 2a 24 31 66 35 33 62  34 63 64 2d 61 62 30 31  |.*$1f53b4cd-ab01|
		00000050  2d 34 32 63 61 2d 61 36  61 36 2d 61 39 33 65 66  |-42ca-a6a6-a93ef|
		00000060  63 39 62 64 34 64 66 32  04 31 39 34 39 38 00 42  |c9bd4df2.19498.B|
		00000070  08 08 dd b4 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000080  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000090  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		000000a0  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000b0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/ar [truncated 59407 chars]
	 >
	I0407 14:24:57.913785    9664 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 14:24:57.913850    9664 node_conditions.go:123] node cpu capacity is 2
	I0407 14:24:57.913913    9664 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 14:24:57.913913    9664 node_conditions.go:123] node cpu capacity is 2
	I0407 14:24:57.913913    9664 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 14:24:57.913913    9664 node_conditions.go:123] node cpu capacity is 2
	I0407 14:24:57.913913    9664 node_conditions.go:105] duration metric: took 19.4413ms to run NodePressure ...
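The NodePressure verification reads each node's capacity (2 CPUs and 17734596Ki of ephemeral storage on all three nodes here) from the same protobuf node list. A sketch of reading those fields plus the pressure conditions with client-go, again with a hypothetical kubeconfig path:

-- example sketch --
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig") // hypothetical path
	if err != nil {
		panic(err)
	}
	clientset := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := clientset.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
		for _, c := range n.Status.Conditions {
			// The check cares about the pressure conditions staying False.
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure || c.Type == corev1.NodePIDPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
-- /example sketch --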
	I0407 14:24:57.913913    9664 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:24:58.482931    9664 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0407 14:24:58.483053    9664 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0407 14:24:58.483131    9664 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0407 14:24:58.483338    9664 type.go:204] "Request Body" body=""
	I0407 14:24:58.483536    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0407 14:24:58.483590    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:58.483590    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:58.483590    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:58.488617    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:24:58.488617    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:58.488617    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:58.488617    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:58.488679    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:58.488679    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:58.488679    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:58 GMT
	I0407 14:24:58.488679    9664 round_trippers.go:587]     Audit-Id: 5ccbbc91-0502-4dcc-aeb1-90a4d3b77a42
	I0407 14:24:58.489892    9664 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 d1 bc 01 0a  0a 0a 00 12 04 31 39 36  |ist..........196|
		00000020  39 1a 00 12 97 2d 0a d5  1a 0a 15 65 74 63 64 2d  |9....-.....etcd-|
		00000030  6d 75 6c 74 69 6e 6f 64  65 2d 31 34 30 32 30 30  |multinode-140200|
		00000040  12 00 1a 0b 6b 75 62 65  2d 73 79 73 74 65 6d 22  |....kube-system"|
		00000050  00 2a 24 35 30 65 38 34  63 35 36 2d 35 64 37 38  |.*$50e84c56-5d78|
		00000060  2d 34 61 35 31 2d 62 64  36 33 2d 34 61 37 32 34  |-4a51-bd63-4a724|
		00000070  63 63 64 35 66 64 38 32  04 31 39 31 32 38 00 42  |ccd5fd82.19128.B|
		00000080  08 08 b7 c0 cf bf 06 10  00 5a 11 0a 09 63 6f 6d  |.........Z...com|
		00000090  70 6f 6e 65 6e 74 12 04  65 74 63 64 5a 15 0a 04  |ponent..etcdZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 4d 0a 30 6b  75 62 65 61 64 6d 2e 6b  |anebM.0kubeadm.k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 65 74 63  |ubernetes.io/et [truncated 118341 chars]
	 >
	I0407 14:24:58.490300    9664 kubeadm.go:739] kubelet initialised
	I0407 14:24:58.490378    9664 kubeadm.go:740] duration metric: took 7.2469ms waiting for restarted kubelet to initialise ...
	I0407 14:24:58.490405    9664 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 14:24:58.490405    9664 type.go:204] "Request Body" body=""
	I0407 14:24:58.490405    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods
	I0407 14:24:58.490405    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:58.490405    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:58.490405    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:58.495727    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:24:58.495768    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:58.495768    9664 round_trippers.go:587]     Audit-Id: a0d36550-e66c-4556-b8be-272016a1b460
	I0407 14:24:58.495768    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:58.495768    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:58.495768    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:58.495768    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:58.495768    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:58 GMT
	I0407 14:24:58.498361    9664 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 a4 ef 03 0a  0a 0a 00 12 04 31 39 36  |ist..........196|
		00000020  39 1a 00 12 80 29 0a 99  19 0a 18 63 6f 72 65 64  |9....).....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 35 66  |ns-668d6bf9bc-5f|
		00000040  70 34 66 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |p4f..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 34 33 37 32 32 36 61  |ystem".*$437226a|
		00000070  65 2d 65 36 33 64 2d 34  32 34 35 2d 62 62 65 61  |e-e63d-4245-bbea|
		00000080  2d 61 64 35 63 34 31 66  66 39 61 39 33 32 04 31  |-ad5c41ff9a932.1|
		00000090  39 32 32 38 00 42 08 08  e6 b4 cf bf 06 10 00 5a  |9228.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 312131 chars]
	 >
	I0407 14:24:58.498361    9664 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-5fp4f" in "kube-system" namespace to be "Ready" ...
	I0407 14:24:58.498361    9664 type.go:168] "Request Body" body=""
	I0407 14:24:58.498361    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-5fp4f
	I0407 14:24:58.498361    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:58.499363    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:58.499363    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:58.501361    9664 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0407 14:24:58.501361    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:58.501361    9664 round_trippers.go:587]     Audit-Id: 71f32bdb-8717-42a6-a203-adfaf9de1617
	I0407 14:24:58.501361    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:58.501361    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:58.501361    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:58.501361    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:58.501361    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:58 GMT
	I0407 14:24:58.502383    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  80 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 35 66 70 34 66 12  |68d6bf9bc-5fp4f.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 34 33 37  32 32 36 61 65 2d 65 36  |m".*$437226ae-e6|
		00000060  33 64 2d 34 32 34 35 2d  62 62 65 61 2d 61 64 35  |3d-4245-bbea-ad5|
		00000070  63 34 31 66 66 39 61 39  33 32 04 31 39 32 32 38  |c41ff9a932.19228|
		00000080  00 42 08 08 e6 b4 cf bf  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25036 chars]
	 >
	I0407 14:24:58.502383    9664 type.go:168] "Request Body" body=""
	I0407 14:24:58.502383    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:24:58.502383    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:58.502383    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:58.502383    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:58.504366    9664 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0407 14:24:58.505371    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:58.505371    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:58.505371    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:58.505371    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:58.505371    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:58 GMT
	I0407 14:24:58.505426    9664 round_trippers.go:587]     Audit-Id: bbe0d86b-3fd6-4195-afba-946e63c7d275
	I0407 14:24:58.505426    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:58.505426    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:24:58.505426    9664 pod_ready.go:98] node "multinode-140200" hosting pod "coredns-668d6bf9bc-5fp4f" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200" has status "Ready":"False"
	I0407 14:24:58.505426    9664 pod_ready.go:82] duration metric: took 7.0645ms for pod "coredns-668d6bf9bc-5fp4f" in "kube-system" namespace to be "Ready" ...
	E0407 14:24:58.505426    9664 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-140200" hosting pod "coredns-668d6bf9bc-5fp4f" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200" has status "Ready":"False"
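Each `waiting up to 4m0s for pod ...` step fetches the pod and then its node; because multinode-140200 still reports Ready=False after the kubelet restart, the wait is short-circuited with the `(skipping!)` message instead of blocking for the full timeout. A small sketch of that readiness predicate over already-fetched objects (toy values, not the real API responses):

-- example sketch --
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// nodeReady reports whether the node's Ready condition is True.
func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// podReady reports whether the pod's Ready condition is True.
func podReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	// Toy objects standing in for the fetched pod and node in the log.
	node := &corev1.Node{Status: corev1.NodeStatus{Conditions: []corev1.NodeCondition{{Type: corev1.NodeReady, Status: corev1.ConditionFalse}}}}
	pod := &corev1.Pod{Status: corev1.PodStatus{Conditions: []corev1.PodCondition{{Type: corev1.PodReady, Status: corev1.ConditionFalse}}}}
	if !nodeReady(node) {
		fmt.Println(`node not "Ready", skipping the per-pod wait`)
		return
	}
	fmt.Println("pod ready:", podReady(pod))
}
-- /example sketch --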
	I0407 14:24:58.505426    9664 pod_ready.go:79] waiting up to 4m0s for pod "etcd-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:24:58.505426    9664 type.go:168] "Request Body" body=""
	I0407 14:24:58.505991    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-140200
	I0407 14:24:58.505991    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:58.505991    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:58.505991    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:58.507666    9664 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0407 14:24:58.507666    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:58.507666    9664 round_trippers.go:587]     Audit-Id: 1a4b46d3-b9e2-4f73-9ab1-78a131a71898
	I0407 14:24:58.507666    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:58.507666    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:58.507666    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:58.507666    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:58.507666    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:58 GMT
	I0407 14:24:58.508661    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  97 2d 0a d5 1a 0a 15 65  74 63 64 2d 6d 75 6c 74  |.-.....etcd-mult|
		00000020  69 6e 6f 64 65 2d 31 34  30 32 30 30 12 00 1a 0b  |inode-140200....|
		00000030  6b 75 62 65 2d 73 79 73  74 65 6d 22 00 2a 24 35  |kube-system".*$5|
		00000040  30 65 38 34 63 35 36 2d  35 64 37 38 2d 34 61 35  |0e84c56-5d78-4a5|
		00000050  31 2d 62 64 36 33 2d 34  61 37 32 34 63 63 64 35  |1-bd63-4a724ccd5|
		00000060  66 64 38 32 04 31 39 31  32 38 00 42 08 08 b7 c0  |fd82.19128.B....|
		00000070  cf bf 06 10 00 5a 11 0a  09 63 6f 6d 70 6f 6e 65  |.....Z...compone|
		00000080  6e 74 12 04 65 74 63 64  5a 15 0a 04 74 69 65 72  |nt..etcdZ...tier|
		00000090  12 0d 63 6f 6e 74 72 6f  6c 2d 70 6c 61 6e 65 62  |..control-planeb|
		000000a0  4d 0a 30 6b 75 62 65 61  64 6d 2e 6b 75 62 65 72  |M.0kubeadm.kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 65 74 63 64 2e 61 64  |netes.io/etcd.ad|
		000000c0  76 65 72 74 69 73 65 2d  63 6c 69 65 6e 74 2d 75  |vertise-client- [truncated 27650 chars]
	 >
	I0407 14:24:58.508661    9664 type.go:168] "Request Body" body=""
	I0407 14:24:58.508661    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:24:58.508661    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:58.508661    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:58.508661    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:58.518658    9664 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0407 14:24:58.518658    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:58.518658    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:58.518658    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:58.518658    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:58.518658    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:58.518658    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:58 GMT
	I0407 14:24:58.518658    9664 round_trippers.go:587]     Audit-Id: 3548c9a5-cb6e-4cef-a1ad-881a049000f1
	I0407 14:24:58.519659    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:24:58.519659    9664 pod_ready.go:98] node "multinode-140200" hosting pod "etcd-multinode-140200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200" has status "Ready":"False"
	I0407 14:24:58.519659    9664 pod_ready.go:82] duration metric: took 14.2335ms for pod "etcd-multinode-140200" in "kube-system" namespace to be "Ready" ...
	E0407 14:24:58.519659    9664 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-140200" hosting pod "etcd-multinode-140200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200" has status "Ready":"False"
	I0407 14:24:58.519659    9664 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:24:58.519659    9664 type.go:168] "Request Body" body=""
	I0407 14:24:58.519659    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-140200
	I0407 14:24:58.519659    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:58.519659    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:58.519659    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:58.522754    9664 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:24:58.522820    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:58.522820    9664 round_trippers.go:587]     Audit-Id: decdf752-1ebc-442e-ac3f-4715af91eeb2
	I0407 14:24:58.522820    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:58.522820    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:58.522820    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:58.522820    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:58.522820    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:58 GMT
	I0407 14:24:58.523253    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  db 36 0a e5 1c 0a 1f 6b  75 62 65 2d 61 70 69 73  |.6.....kube-apis|
		00000020  65 72 76 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |erver-multinode-|
		00000030  31 34 30 32 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |140200....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 31 34 34 37 35 33 64  |ystem".*$144753d|
		00000050  63 2d 63 36 32 31 2d 34  35 66 37 2d 61 39 34 61  |c-c621-45f7-a94a|
		00000060  2d 38 62 33 38 33 35 65  65 62 62 31 32 32 04 31  |-8b3835eebb122.1|
		00000070  39 33 38 38 00 42 08 08  b7 c0 cf bf 06 10 00 5a  |9388.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 61 70 69 73 65  72 76 65 72 5a 15 0a 04  |be-apiserverZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 54 0a 3f 6b  75 62 65 61 64 6d 2e 6b  |anebT.?kubeadm.k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 6b 75 62  |ubernetes.io/ku [truncated 33721 chars]
	 >
	I0407 14:24:58.523253    9664 type.go:168] "Request Body" body=""
	I0407 14:24:58.523253    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:24:58.523253    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:58.523253    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:58.523253    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:58.525641    9664 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:24:58.525730    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:58.525730    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:58 GMT
	I0407 14:24:58.525730    9664 round_trippers.go:587]     Audit-Id: 889d9163-3962-495e-91f0-1fc48bda4632
	I0407 14:24:58.525730    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:58.525730    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:58.525730    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:58.525730    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:58.525730    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:24:58.526313    9664 pod_ready.go:98] node "multinode-140200" hosting pod "kube-apiserver-multinode-140200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200" has status "Ready":"False"
	I0407 14:24:58.526313    9664 pod_ready.go:82] duration metric: took 6.6532ms for pod "kube-apiserver-multinode-140200" in "kube-system" namespace to be "Ready" ...
	E0407 14:24:58.526313    9664 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-140200" hosting pod "kube-apiserver-multinode-140200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200" has status "Ready":"False"
	I0407 14:24:58.526313    9664 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:24:58.526313    9664 type.go:168] "Request Body" body=""
	I0407 14:24:58.526313    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-140200
	I0407 14:24:58.526313    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:58.526313    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:58.526313    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:58.529335    9664 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:24:58.529335    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:58.529335    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:58.529335    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:58.529335    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:58 GMT
	I0407 14:24:58.529335    9664 round_trippers.go:587]     Audit-Id: 8847b9ad-fc10-44e9-bc10-cd29fa5ace23
	I0407 14:24:58.529400    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:58.529400    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:58.529400    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  a7 33 0a d3 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 31 34 30 32 30 30 12  |ultinode-140200.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 61 37 63 36 65 33  62 62 2d 31 39 37 63 2d  |*$a7c6e3bb-197c-|
		00000060  34 33 34 65 2d 39 66 31  39 2d 37 34 64 37 65 34  |434e-9f19-74d7e4|
		00000070  38 62 35 30 64 65 32 04  31 39 31 36 38 00 42 08  |8b50de2.19168.B.|
		00000080  08 e0 b4 cf bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31521 chars]
	 >
	I0407 14:24:58.529929    9664 type.go:168] "Request Body" body=""
	I0407 14:24:58.529980    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:24:58.529980    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:58.529980    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:58.529980    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:58.532273    9664 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:24:58.532273    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:58.532273    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:58.532273    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:58.532273    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:58.532273    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:58 GMT
	I0407 14:24:58.532273    9664 round_trippers.go:587]     Audit-Id: f2b41d93-47a7-47de-bd48-133fc3874d24
	I0407 14:24:58.532273    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:58.532368    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:24:58.532368    9664 pod_ready.go:98] node "multinode-140200" hosting pod "kube-controller-manager-multinode-140200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200" has status "Ready":"False"
	I0407 14:24:58.532368    9664 pod_ready.go:82] duration metric: took 6.0549ms for pod "kube-controller-manager-multinode-140200" in "kube-system" namespace to be "Ready" ...
	E0407 14:24:58.532368    9664 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-140200" hosting pod "kube-controller-manager-multinode-140200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200" has status "Ready":"False"
	I0407 14:24:58.532368    9664 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-2r7lj" in "kube-system" namespace to be "Ready" ...
	I0407 14:24:58.532904    9664 type.go:168] "Request Body" body=""
	I0407 14:24:58.683517    9664 request.go:661] Waited for 150.6121ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2r7lj
	I0407 14:24:58.683979    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2r7lj
	I0407 14:24:58.683979    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:58.683979    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:58.683979    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:58.687869    9664 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:24:58.687977    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:58.687977    9664 round_trippers.go:587]     Audit-Id: c51293cf-a4e0-411a-b86d-7c21c848493b
	I0407 14:24:58.687977    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:58.687977    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:58.687977    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:58.687977    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:58.687977    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:58 GMT
	I0407 14:24:58.688387    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  a0 25 0a be 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.%.....kube-prox|
		00000020  79 2d 32 72 37 6c 6a 12  0b 6b 75 62 65 2d 70 72  |y-2r7lj..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 34 38 39  32 64 37 30 33 2d 66 63  |m".*$4892d703-fc|
		00000050  34 33 2d 34 66 36 37 2d  38 34 39 33 2d 65 61 65  |43-4f67-8493-eae|
		00000060  61 65 38 63 35 65 37 36  35 32 03 36 33 32 38 00  |ae8c5e7652.6328.|
		00000070  42 08 08 a0 b6 cf bf 06  10 00 5a 26 0a 18 63 6f  |B.........Z&..co|
		00000080  6e 74 72 6f 6c 6c 65 72  2d 72 65 76 69 73 69 6f  |ntroller-revisio|
		00000090  6e 2d 68 61 73 68 12 0a  37 62 62 38 34 63 34 39  |n-hash..7bb84c49|
		000000a0  38 34 5a 15 0a 07 6b 38  73 2d 61 70 70 12 0a 6b  |84Z...k8s-app..k|
		000000b0  75 62 65 2d 70 72 6f 78  79 5a 1c 0a 17 70 6f 64  |ube-proxyZ...pod|
		000000c0  2d 74 65 6d 70 6c 61 74  65 2d 67 65 6e 65 72 61  |-template-gener [truncated 22666 chars]
	 >
	I0407 14:24:58.688645    9664 type.go:168] "Request Body" body=""
	I0407 14:24:58.883647    9664 request.go:661] Waited for 195.001ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:24:58.884127    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:24:58.884127    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:58.884127    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:58.884127    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:58.887541    9664 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:24:58.887541    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:58.887633    9664 round_trippers.go:587]     Content-Length: 3463
	I0407 14:24:58.887633    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:58 GMT
	I0407 14:24:58.887633    9664 round_trippers.go:587]     Audit-Id: 5b7d7e48-20ed-4708-a742-9d906f8ce484
	I0407 14:24:58.887633    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:58.887633    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:58.887633    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:58.887633    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:58.887869    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f0 1a 0a b0 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 04 31 37 38 32 38 00  |f2f300172.17828.|
		00000060  42 08 08 a0 b6 cf bf 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16110 chars]
	 >
	I0407 14:24:58.888103    9664 pod_ready.go:93] pod "kube-proxy-2r7lj" in "kube-system" namespace has status "Ready":"True"
	I0407 14:24:58.888103    9664 pod_ready.go:82] duration metric: took 355.7322ms for pod "kube-proxy-2r7lj" in "kube-system" namespace to be "Ready" ...
	I0407 14:24:58.888179    9664 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9rx2d" in "kube-system" namespace to be "Ready" ...
	I0407 14:24:58.888314    9664 type.go:168] "Request Body" body=""
	I0407 14:24:59.083862    9664 request.go:661] Waited for 195.5228ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9rx2d
	I0407 14:24:59.083862    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9rx2d
	I0407 14:24:59.084288    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:59.084288    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:59.084288    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:59.088499    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:24:59.088582    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:59.088582    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:59.088582    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:59.088582    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:59.088582    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:59 GMT
	I0407 14:24:59.088582    9664 round_trippers.go:587]     Audit-Id: b5ebf68f-ef34-4168-bcae-f7306cc68792
	I0407 14:24:59.088582    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:59.088750    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  87 26 0a bf 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 39 72 78 32 64 12  0b 6b 75 62 65 2d 70 72  |y-9rx2d..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 32 65 61  61 62 32 35 64 2d 66 65  |m".*$2eaab25d-fe|
		00000050  30 62 2d 34 63 34 38 2d  61 63 36 62 2d 34 32 30  |0b-4c48-ac6b-420|
		00000060  39 35 66 35 66 62 63 65  36 32 04 31 39 36 35 38  |95f5fbce62.19658|
		00000070  00 42 08 08 e5 b4 cf bf  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 37 62 62 38 34 63 34  |on-hash..7bb84c4|
		000000a0  39 38 34 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |984Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23147 chars]
	 >
	I0407 14:24:59.089279    9664 type.go:168] "Request Body" body=""
	I0407 14:24:59.284200    9664 request.go:661] Waited for 194.9195ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:24:59.284913    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:24:59.284913    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:59.284913    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:59.284913    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:59.291290    9664 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 14:24:59.291401    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:59.291401    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:59.291401    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:59.291401    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:59.291401    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:59 GMT
	I0407 14:24:59.291401    9664 round_trippers.go:587]     Audit-Id: 4912de46-39a7-4e90-b37f-100477e9d131
	I0407 14:24:59.291401    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:59.291401    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:24:59.292119    9664 pod_ready.go:98] node "multinode-140200" hosting pod "kube-proxy-9rx2d" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200" has status "Ready":"False"
	I0407 14:24:59.292119    9664 pod_ready.go:82] duration metric: took 403.937ms for pod "kube-proxy-9rx2d" in "kube-system" namespace to be "Ready" ...
	E0407 14:24:59.292119    9664 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-140200" hosting pod "kube-proxy-9rx2d" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200" has status "Ready":"False"
	I0407 14:24:59.292119    9664 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kvg58" in "kube-system" namespace to be "Ready" ...
	I0407 14:24:59.292230    9664 type.go:168] "Request Body" body=""
	I0407 14:24:59.484482    9664 request.go:661] Waited for 192.2501ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kvg58
	I0407 14:24:59.484482    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kvg58
	I0407 14:24:59.484482    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:59.484482    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:59.484482    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:59.489200    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:24:59.489200    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:59.489272    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:59.489272    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:59 GMT
	I0407 14:24:59.489272    9664 round_trippers.go:587]     Audit-Id: 3b0fee65-1040-4e7e-9097-0d8ea8407b2b
	I0407 14:24:59.489272    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:59.489272    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:59.489272    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:59.489625    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  a5 26 0a c2 15 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 6b 76 67 35 38 12  0b 6b 75 62 65 2d 70 72  |y-kvg58..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 62 61 38  61 33 33 32 63 2d 62 62  |m".*$ba8a332c-bb|
		00000050  34 61 2d 34 65 39 63 2d  39 61 34 65 2d 32 63 35  |4a-4e9c-9a4e-2c5|
		00000060  37 38 62 64 63 39 39 63  31 32 04 31 38 33 36 38  |78bdc99c12.18368|
		00000070  00 42 08 08 c8 b8 cf bf  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 37 62 62 38 34 63 34  |on-hash..7bb84c4|
		000000a0  39 38 34 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |984Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23303 chars]
	 >
	I0407 14:24:59.490005    9664 type.go:168] "Request Body" body=""
	I0407 14:24:59.684378    9664 request.go:661] Waited for 194.3711ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/nodes/multinode-140200-m03
	I0407 14:24:59.684378    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200-m03
	I0407 14:24:59.684378    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:59.684378    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:59.684378    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:59.689353    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:24:59.689353    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:59.689353    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:59.689353    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:59.689353    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:59.689353    9664 round_trippers.go:587]     Content-Length: 3882
	I0407 14:24:59.689353    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:59 GMT
	I0407 14:24:59.689353    9664 round_trippers.go:587]     Audit-Id: 8d63d21a-8709-4d3a-bb59-9321d0f8c2d0
	I0407 14:24:59.689353    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:59.689881    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 93 1e 0a eb 12 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 33 12 00 1a 00  |e-140200-m03....|
		00000030  22 00 2a 24 64 33 34 31  65 64 66 63 2d 36 33 31  |".*$d341edfc-631|
		00000040  35 2d 34 62 37 62 2d 38  33 30 34 2d 66 39 32 62  |5-4b7b-8304-f92b|
		00000050  63 34 32 31 32 65 39 33  32 04 31 39 35 32 38 00  |c4212e932.19528.|
		00000060  42 08 08 89 be cf bf 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 18167 chars]
	 >
	I0407 14:24:59.690143    9664 pod_ready.go:98] node "multinode-140200-m03" hosting pod "kube-proxy-kvg58" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200-m03" has status "Ready":"Unknown"
	I0407 14:24:59.690143    9664 pod_ready.go:82] duration metric: took 398.021ms for pod "kube-proxy-kvg58" in "kube-system" namespace to be "Ready" ...
	E0407 14:24:59.690143    9664 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-140200-m03" hosting pod "kube-proxy-kvg58" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200-m03" has status "Ready":"Unknown"
	I0407 14:24:59.690143    9664 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:24:59.690143    9664 type.go:168] "Request Body" body=""
	I0407 14:24:59.884710    9664 request.go:661] Waited for 194.5656ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-140200
	I0407 14:24:59.885265    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-140200
	I0407 14:24:59.885454    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:59.885454    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:59.885454    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:59.890245    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:24:59.890332    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:59.890332    9664 round_trippers.go:587]     Audit-Id: 9eb4185b-6080-422f-bfc0-0a6476ac1505
	I0407 14:24:59.890332    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:59.890394    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:59.890394    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:59.890394    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:59.890394    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:59 GMT
	I0407 14:24:59.890713    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  a0 25 0a bb 18 0a 1f 6b  75 62 65 2d 73 63 68 65  |.%.....kube-sche|
		00000020  64 75 6c 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |duler-multinode-|
		00000030  31 34 30 32 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |140200....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 38 38 64 66 65 65 65  |ystem".*$88dfeee|
		00000050  38 2d 61 33 63 31 2d 34  38 35 62 2d 61 62 66 65  |8-a3c1-485b-abfe|
		00000060  2d 39 65 61 66 30 30 35  37 64 36 63 66 32 04 31  |-9eaf0057d6cf2.1|
		00000070  39 30 38 38 00 42 08 08  e0 b4 cf bf 06 10 00 5a  |9088.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 73 63 68 65 64  75 6c 65 72 5a 15 0a 04  |be-schedulerZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 3d 0a 19 6b  75 62 65 72 6e 65 74 65  |aneb=..kubernete|
		000000c0  73 2e 69 6f 2f 63 6f 6e  66 69 67 2e 68 61 73 68  |s.io/config.has [truncated 22666 chars]
	 >
	I0407 14:24:59.890996    9664 type.go:168] "Request Body" body=""
	I0407 14:25:00.083879    9664 request.go:661] Waited for 192.8823ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:00.084495    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:00.084495    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:00.084495    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:00.084495    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:00.089561    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:25:00.089561    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:00.089561    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:00 GMT
	I0407 14:25:00.089561    9664 round_trippers.go:587]     Audit-Id: 5a1fc21a-6f71-450c-95a0-cc1a6067d95d
	I0407 14:25:00.089561    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:00.089561    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:00.089561    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:00.089561    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:00.089942    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:00.090176    9664 pod_ready.go:98] node "multinode-140200" hosting pod "kube-scheduler-multinode-140200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200" has status "Ready":"False"
	I0407 14:25:00.090176    9664 pod_ready.go:82] duration metric: took 400.0294ms for pod "kube-scheduler-multinode-140200" in "kube-system" namespace to be "Ready" ...
	E0407 14:25:00.090176    9664 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-140200" hosting pod "kube-scheduler-multinode-140200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200" has status "Ready":"False"
	I0407 14:25:00.090176    9664 pod_ready.go:39] duration metric: took 1.599758s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 14:25:00.090176    9664 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0407 14:25:00.110151    9664 command_runner.go:130] > -16
	I0407 14:25:00.110151    9664 ops.go:34] apiserver oom_adj: -16
	I0407 14:25:00.110315    9664 kubeadm.go:597] duration metric: took 13.3275744s to restartPrimaryControlPlane
	I0407 14:25:00.110315    9664 kubeadm.go:394] duration metric: took 13.3983331s to StartCluster
	I0407 14:25:00.110315    9664 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:25:00.110387    9664 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 14:25:00.112220    9664 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:25:00.113757    9664 start.go:235] Will wait 6m0s for node &{Name: IP:172.17.81.10 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 14:25:00.113757    9664 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0407 14:25:00.114273    9664 config.go:182] Loaded profile config "multinode-140200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 14:25:00.118071    9664 out.go:177] * Verifying Kubernetes components...
	I0407 14:25:00.120740    9664 out.go:177] * Enabled addons: 
	I0407 14:25:00.126386    9664 addons.go:514] duration metric: took 12.6283ms for enable addons: enabled=[]
	I0407 14:25:00.134265    9664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:25:00.400679    9664 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 14:25:00.425441    9664 node_ready.go:35] waiting up to 6m0s for node "multinode-140200" to be "Ready" ...
	I0407 14:25:00.425441    9664 type.go:168] "Request Body" body=""
	I0407 14:25:00.425441    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:00.425441    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:00.425441    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:00.425441    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:00.430472    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:25:00.430472    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:00.430472    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:00 GMT
	I0407 14:25:00.430472    9664 round_trippers.go:587]     Audit-Id: aabdb8a9-d716-4491-930f-2f847139840c
	I0407 14:25:00.430472    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:00.430472    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:00.430608    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:00.430608    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:00.430879    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:00.925788    9664 type.go:168] "Request Body" body=""
	I0407 14:25:00.925788    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:00.925788    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:00.925788    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:00.925788    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:00.930499    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:00.930593    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:00.930593    9664 round_trippers.go:587]     Audit-Id: 62aef4e6-1b18-47b2-b52a-6919765d656a
	I0407 14:25:00.930593    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:00.930593    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:00.930593    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:00.930593    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:00.930593    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:00 GMT
	I0407 14:25:00.930801    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:01.425699    9664 type.go:168] "Request Body" body=""
	I0407 14:25:01.425699    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:01.425699    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:01.425699    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:01.425699    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:01.430342    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:01.430342    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:01.430443    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:01.430443    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:01 GMT
	I0407 14:25:01.430443    9664 round_trippers.go:587]     Audit-Id: bfaee13d-9a70-453c-9f17-334c020b066b
	I0407 14:25:01.430443    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:01.430443    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:01.430443    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:01.430904    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:01.925956    9664 type.go:168] "Request Body" body=""
	I0407 14:25:01.925956    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:01.925956    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:01.925956    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:01.925956    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:01.931123    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:25:01.931200    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:01.931200    9664 round_trippers.go:587]     Audit-Id: 6b8ad517-a924-4514-95b1-71846a38b965
	I0407 14:25:01.931200    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:01.931200    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:01.931255    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:01.931255    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:01.931255    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:01 GMT
	I0407 14:25:01.931599    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:02.425806    9664 type.go:168] "Request Body" body=""
	I0407 14:25:02.425806    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:02.425806    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:02.425806    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:02.425806    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:02.430726    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:02.430866    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:02.430866    9664 round_trippers.go:587]     Audit-Id: e12334c6-a2d8-41dd-8d6a-ccd5895165d8
	I0407 14:25:02.430866    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:02.430866    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:02.430866    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:02.430915    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:02.430915    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:02 GMT
	I0407 14:25:02.431016    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:02.431016    9664 node_ready.go:53] node "multinode-140200" has status "Ready":"False"
	I0407 14:25:02.926578    9664 type.go:168] "Request Body" body=""
	I0407 14:25:02.926578    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:02.926578    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:02.926578    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:02.926578    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:02.931394    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:02.931394    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:02.931394    9664 round_trippers.go:587]     Audit-Id: 15696a70-5440-47a7-8e90-936f91eac45c
	I0407 14:25:02.931394    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:02.931394    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:02.931394    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:02.931394    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:02.931394    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:02 GMT
	I0407 14:25:02.932088    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:03.426549    9664 type.go:168] "Request Body" body=""
	I0407 14:25:03.427211    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:03.427211    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:03.427211    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:03.427211    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:03.435708    9664 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0407 14:25:03.435708    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:03.435841    9664 round_trippers.go:587]     Audit-Id: 589e825f-d3a2-47c4-a1e3-31b039a1ee65
	I0407 14:25:03.435841    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:03.435841    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:03.435841    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:03.435841    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:03.435841    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:03 GMT
	I0407 14:25:03.436158    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:03.925626    9664 type.go:168] "Request Body" body=""
	I0407 14:25:03.925626    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:03.925626    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:03.925626    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:03.925626    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:03.931299    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:25:03.931395    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:03.931395    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:03.931395    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:03.931395    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:03.931395    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:03.931479    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:03 GMT
	I0407 14:25:03.931479    9664 round_trippers.go:587]     Audit-Id: 72ce6fa3-8d21-4487-ad2f-396ba17685b7
	I0407 14:25:03.931794    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:04.426305    9664 type.go:168] "Request Body" body=""
	I0407 14:25:04.426305    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:04.426305    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:04.426305    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:04.426305    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:04.433014    9664 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 14:25:04.433014    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:04.433014    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:04.433014    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:04 GMT
	I0407 14:25:04.433014    9664 round_trippers.go:587]     Audit-Id: 96661bd7-8ec3-4f7a-a28d-b2921bfdee96
	I0407 14:25:04.433014    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:04.433014    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:04.433014    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:04.433014    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:04.433567    9664 node_ready.go:53] node "multinode-140200" has status "Ready":"False"
	I0407 14:25:04.925669    9664 type.go:168] "Request Body" body=""
	I0407 14:25:04.925669    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:04.925669    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:04.925669    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:04.925669    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:04.932988    9664 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 14:25:04.933018    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:04.933018    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:04 GMT
	I0407 14:25:04.933018    9664 round_trippers.go:587]     Audit-Id: f99a7bf1-2469-4aca-8617-6ab5b44b8a03
	I0407 14:25:04.933018    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:04.933018    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:04.933018    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:04.933018    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:04.933018    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:05.426505    9664 type.go:168] "Request Body" body=""
	I0407 14:25:05.427190    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:05.427190    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:05.427190    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:05.427190    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:05.431857    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:05.432239    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:05.432239    9664 round_trippers.go:587]     Audit-Id: 15a9fde2-9288-4da4-982a-cfa791fefcbf
	I0407 14:25:05.432239    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:05.432239    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:05.432239    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:05.432239    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:05.432239    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:05 GMT
	I0407 14:25:05.433876    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:05.925969    9664 type.go:168] "Request Body" body=""
	I0407 14:25:05.925969    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:05.925969    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:05.925969    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:05.925969    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:05.931166    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:05.931198    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:05.931198    9664 round_trippers.go:587]     Audit-Id: 3d6c54fb-d40d-48f0-8715-ec940ebd6700
	I0407 14:25:05.931198    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:05.931198    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:05.931198    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:05.931198    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:05.931198    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:05 GMT
	I0407 14:25:05.931566    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:06.426412    9664 type.go:168] "Request Body" body=""
	I0407 14:25:06.426412    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:06.426412    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:06.426412    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:06.426412    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:06.431915    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:25:06.431915    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:06.431915    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:06 GMT
	I0407 14:25:06.431915    9664 round_trippers.go:587]     Audit-Id: 9b213c6a-a4ab-496e-bbe2-2dd0fbeac773
	I0407 14:25:06.432019    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:06.432019    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:06.432019    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:06.432019    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:06.434687    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:06.434837    9664 node_ready.go:53] node "multinode-140200" has status "Ready":"False"
	I0407 14:25:06.926115    9664 type.go:168] "Request Body" body=""
	I0407 14:25:06.926115    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:06.926115    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:06.926115    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:06.926115    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:06.932252    9664 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 14:25:06.932396    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:06.932396    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:06.932562    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:06.932562    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:06 GMT
	I0407 14:25:06.932562    9664 round_trippers.go:587]     Audit-Id: 5e962049-1c8f-42ed-8e17-4868bf148aec
	I0407 14:25:06.932562    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:06.932562    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:06.933192    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:07.426273    9664 type.go:168] "Request Body" body=""
	I0407 14:25:07.426273    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:07.426273    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:07.426273    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:07.426273    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:07.430727    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:07.430727    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:07.430727    9664 round_trippers.go:587]     Audit-Id: eb79b29d-7a8a-487f-ba45-e90af29cb7ac
	I0407 14:25:07.430727    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:07.430727    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:07.430727    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:07.430727    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:07.430727    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:07 GMT
	I0407 14:25:07.431266    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:07.926104    9664 type.go:168] "Request Body" body=""
	I0407 14:25:07.926104    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:07.926104    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:07.926104    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:07.926104    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:07.930754    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:07.930754    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:07.930754    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:07.931022    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:07.931022    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:07.931022    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:07.931022    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:07 GMT
	I0407 14:25:07.931022    9664 round_trippers.go:587]     Audit-Id: 350f91d8-1900-4d12-88b7-b7c8b2d1637f
	I0407 14:25:07.931236    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:08.426046    9664 type.go:168] "Request Body" body=""
	I0407 14:25:08.426046    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:08.426046    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:08.426046    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:08.426046    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:08.431522    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:25:08.431522    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:08.431522    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:08.431522    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:08.431522    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:08.431522    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:08 GMT
	I0407 14:25:08.431522    9664 round_trippers.go:587]     Audit-Id: 99eb50f5-5758-4a84-8a7a-f7b793738a79
	I0407 14:25:08.431522    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:08.431847    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:08.926340    9664 type.go:168] "Request Body" body=""
	I0407 14:25:08.926340    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:08.926340    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:08.926340    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:08.926340    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:08.932711    9664 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 14:25:08.932711    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:08.932711    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:08.932711    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:08.932711    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:08.932711    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:08.932711    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:08 GMT
	I0407 14:25:08.932711    9664 round_trippers.go:587]     Audit-Id: 2cbdf883-3bfc-4449-aaaf-3cacf6e20358
	I0407 14:25:08.934098    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:08.934221    9664 node_ready.go:53] node "multinode-140200" has status "Ready":"False"
	I0407 14:25:09.426622    9664 type.go:168] "Request Body" body=""
	I0407 14:25:09.426768    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:09.426768    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:09.426768    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:09.426768    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:09.430938    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:09.430938    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:09.430938    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:09 GMT
	I0407 14:25:09.430938    9664 round_trippers.go:587]     Audit-Id: 7b7e9fd6-8c30-475f-adff-6a9bad9f4687
	I0407 14:25:09.430938    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:09.430938    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:09.430938    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:09.430938    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:09.430938    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:09.926506    9664 type.go:168] "Request Body" body=""
	I0407 14:25:09.926506    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:09.926506    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:09.926506    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:09.926506    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:09.932443    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:25:09.932443    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:09.932443    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:09.932443    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:09 GMT
	I0407 14:25:09.932443    9664 round_trippers.go:587]     Audit-Id: 28e4709f-8ebb-4eff-aa38-9c875c18c07b
	I0407 14:25:09.932443    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:09.932533    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:09.932533    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:09.932723    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:10.426916    9664 type.go:168] "Request Body" body=""
	I0407 14:25:10.426916    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:10.426916    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:10.426916    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:10.426916    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:10.432713    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:25:10.432713    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:10.432713    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:10.432713    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:10.432713    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:10 GMT
	I0407 14:25:10.432713    9664 round_trippers.go:587]     Audit-Id: c732ccb7-8fcb-4f1b-8f6d-a95b98dceb52
	I0407 14:25:10.432713    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:10.432713    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:10.432713    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:10.926243    9664 type.go:168] "Request Body" body=""
	I0407 14:25:10.926243    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:10.926243    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:10.926243    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:10.926243    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:10.931365    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:25:10.931908    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:10.931908    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:10.931908    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:10 GMT
	I0407 14:25:10.931908    9664 round_trippers.go:587]     Audit-Id: 5062a3ca-76cf-4e51-98d0-bd101f2321e4
	I0407 14:25:10.931908    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:10.931908    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:10.931908    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:10.932380    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:11.426262    9664 type.go:168] "Request Body" body=""
	I0407 14:25:11.426262    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:11.426262    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:11.426262    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:11.426262    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:11.431314    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:11.431314    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:11.431314    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:11.431314    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:11 GMT
	I0407 14:25:11.431314    9664 round_trippers.go:587]     Audit-Id: 6bc6b6d7-a3df-40c1-89bf-0729b0c1af1f
	I0407 14:25:11.431314    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:11.431314    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:11.431314    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:11.431314    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:11.431314    9664 node_ready.go:53] node "multinode-140200" has status "Ready":"False"
	I0407 14:25:11.927033    9664 type.go:168] "Request Body" body=""
	I0407 14:25:11.927033    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:11.927033    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:11.927033    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:11.927033    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:11.931454    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:11.931454    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:11.931454    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:11.931454    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:11.931454    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:11 GMT
	I0407 14:25:11.931454    9664 round_trippers.go:587]     Audit-Id: 9518686b-9cab-450b-8b3f-4e05c299019a
	I0407 14:25:11.931454    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:11.931454    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:11.931454    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:12.425622    9664 type.go:168] "Request Body" body=""
	I0407 14:25:12.425622    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:12.425622    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:12.425622    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:12.425622    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:12.429716    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:12.429787    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:12.429787    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:12.429787    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:12.429787    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:12.429787    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:12 GMT
	I0407 14:25:12.429787    9664 round_trippers.go:587]     Audit-Id: cf459972-04e3-4753-85e7-6004731394e5
	I0407 14:25:12.429787    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:12.430237    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:12.926113    9664 type.go:168] "Request Body" body=""
	I0407 14:25:12.926113    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:12.926113    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:12.926113    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:12.926113    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:12.937916    9664 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0407 14:25:12.937979    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:12.937979    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:12.937979    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:12.937979    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:12.937979    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:12 GMT
	I0407 14:25:12.937979    9664 round_trippers.go:587]     Audit-Id: 5613e380-f60b-4f5d-a960-8f3b0f725a5b
	I0407 14:25:12.937979    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:12.938414    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:13.426643    9664 type.go:168] "Request Body" body=""
	I0407 14:25:13.426643    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:13.426643    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:13.426643    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:13.426643    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:13.430976    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:13.430976    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:13.430976    9664 round_trippers.go:587]     Audit-Id: f442f1a3-8a2d-4c2e-9c54-156d2761773b
	I0407 14:25:13.430976    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:13.430976    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:13.430976    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:13.430976    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:13.430976    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:13 GMT
	I0407 14:25:13.430976    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:13.431687    9664 node_ready.go:53] node "multinode-140200" has status "Ready":"False"
	I0407 14:25:13.926520    9664 type.go:168] "Request Body" body=""
	I0407 14:25:13.926520    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:13.926520    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:13.926520    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:13.926520    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:13.932457    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:25:13.932457    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:13.932457    9664 round_trippers.go:587]     Audit-Id: 1af65f19-03cb-4324-8e8c-470e47f5b172
	I0407 14:25:13.932457    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:13.932457    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:13.932457    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:13.932457    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:13.932457    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:13 GMT
	I0407 14:25:13.932792    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:14.425850    9664 type.go:168] "Request Body" body=""
	I0407 14:25:14.426375    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:14.426375    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:14.426375    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:14.426375    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:14.430968    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:14.430968    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:14.430968    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:14.430968    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:14.430968    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:14 GMT
	I0407 14:25:14.430968    9664 round_trippers.go:587]     Audit-Id: b3d6a534-cd3d-4fed-a8ca-a3d3033f8e7d
	I0407 14:25:14.430968    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:14.430968    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:14.430968    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:14.926259    9664 type.go:168] "Request Body" body=""
	I0407 14:25:14.926793    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:14.926920    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:14.926988    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:14.926988    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:14.930947    9664 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:25:14.931060    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:14.931060    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:14.931060    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:14.931060    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:14.931060    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:14 GMT
	I0407 14:25:14.931060    9664 round_trippers.go:587]     Audit-Id: 05b10e81-50ed-42f9-9298-ef3dac5f01d3
	I0407 14:25:14.931060    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:14.931372    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:15.426365    9664 type.go:168] "Request Body" body=""
	I0407 14:25:15.426579    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:15.426579    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:15.426579    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:15.426698    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:15.430743    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:15.430743    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:15.430908    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:15.430908    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:15.430908    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:15.430908    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:15 GMT
	I0407 14:25:15.430908    9664 round_trippers.go:587]     Audit-Id: 59a7bd15-0fb1-4b78-a2b8-e975f02f6e95
	I0407 14:25:15.430908    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:15.431386    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 97 25 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 32 30  31 37 38 00 42 08 08 dd  |d4df2.20178.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22595 chars]
	 >
	I0407 14:25:15.431607    9664 node_ready.go:49] node "multinode-140200" has status "Ready":"True"
	I0407 14:25:15.431724    9664 node_ready.go:38] duration metric: took 15.0061683s for node "multinode-140200" to be "Ready" ...
	I0407 14:25:15.431724    9664 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 14:25:15.431944    9664 type.go:204] "Request Body" body=""
	I0407 14:25:15.431944    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods
	I0407 14:25:15.432090    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:15.432090    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:15.432090    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:15.435465    9664 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:25:15.435465    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:15.435465    9664 round_trippers.go:587]     Audit-Id: fdec0051-74c1-4c67-b56e-e084b58255a5
	I0407 14:25:15.436320    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:15.436320    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:15.436320    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:15.436320    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:15.436320    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:15 GMT
	I0407 14:25:15.440368    9664 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 e7 e8 03 0a  0a 0a 00 12 04 32 30 31  |ist..........201|
		00000020  37 1a 00 12 c1 28 0a af  19 0a 18 63 6f 72 65 64  |7....(.....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 35 66  |ns-668d6bf9bc-5f|
		00000040  70 34 66 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |p4f..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 34 33 37 32 32 36 61  |ystem".*$437226a|
		00000070  65 2d 65 36 33 64 2d 34  32 34 35 2d 62 62 65 61  |e-e63d-4245-bbea|
		00000080  2d 61 64 35 63 34 31 66  66 39 61 39 33 32 04 32  |-ad5c41ff9a932.2|
		00000090  30 30 39 38 00 42 08 08  e6 b4 cf bf 06 10 00 5a  |0098.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 308089 chars]
	 >
	I0407 14:25:15.440837    9664 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-5fp4f" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:15.440837    9664 type.go:168] "Request Body" body=""
	I0407 14:25:15.441409    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-5fp4f
	I0407 14:25:15.441450    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:15.441450    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:15.441450    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:15.444720    9664 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:25:15.444720    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:15.444720    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:15.444720    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:15.444720    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:15.444720    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:15 GMT
	I0407 14:25:15.444720    9664 round_trippers.go:587]     Audit-Id: 0f160a44-3f16-442c-b309-ca10dd5bfbca
	I0407 14:25:15.444720    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:15.444720    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  c1 28 0a af 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.(.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 35 66 70 34 66 12  |68d6bf9bc-5fp4f.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 34 33 37  32 32 36 61 65 2d 65 36  |m".*$437226ae-e6|
		00000060  33 64 2d 34 32 34 35 2d  62 62 65 61 2d 61 64 35  |3d-4245-bbea-ad5|
		00000070  63 34 31 66 66 39 61 39  33 32 04 32 30 30 39 38  |c41ff9a932.20098|
		00000080  00 42 08 08 e6 b4 cf bf  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 24721 chars]
	 >
	I0407 14:25:15.444720    9664 type.go:168] "Request Body" body=""
	I0407 14:25:15.444720    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:15.444720    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:15.444720    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:15.444720    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:15.447824    9664 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:25:15.447824    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:15.447824    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:15.447824    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:15.447824    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:15.447824    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:15.447824    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:15 GMT
	I0407 14:25:15.447824    9664 round_trippers.go:587]     Audit-Id: 8e183c6c-e479-40d4-9bfe-8363c086f46d
	I0407 14:25:15.447824    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 97 25 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 32 30  31 37 38 00 42 08 08 dd  |d4df2.20178.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22595 chars]
	 >
	I0407 14:25:15.447824    9664 pod_ready.go:93] pod "coredns-668d6bf9bc-5fp4f" in "kube-system" namespace has status "Ready":"True"
	I0407 14:25:15.447824    9664 pod_ready.go:82] duration metric: took 6.9868ms for pod "coredns-668d6bf9bc-5fp4f" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:15.447824    9664 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:15.447824    9664 type.go:168] "Request Body" body=""
	I0407 14:25:15.448855    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-140200
	I0407 14:25:15.448947    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:15.448947    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:15.448966    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:15.451206    9664 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:25:15.451206    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:15.451206    9664 round_trippers.go:587]     Audit-Id: ce4ba5ab-b3da-4534-8071-4bc25c2c128e
	I0407 14:25:15.451206    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:15.451206    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:15.451206    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:15.451206    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:15.451206    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:15 GMT
	I0407 14:25:15.452563    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  eb 2b 0a 9b 1a 0a 15 65  74 63 64 2d 6d 75 6c 74  |.+.....etcd-mult|
		00000020  69 6e 6f 64 65 2d 31 34  30 32 30 30 12 00 1a 0b  |inode-140200....|
		00000030  6b 75 62 65 2d 73 79 73  74 65 6d 22 00 2a 24 35  |kube-system".*$5|
		00000040  30 65 38 34 63 35 36 2d  35 64 37 38 2d 34 61 35  |0e84c56-5d78-4a5|
		00000050  31 2d 62 64 36 33 2d 34  61 37 32 34 63 63 64 35  |1-bd63-4a724ccd5|
		00000060  66 64 38 32 04 31 39 39  30 38 00 42 08 08 b7 c0  |fd82.19908.B....|
		00000070  cf bf 06 10 00 5a 11 0a  09 63 6f 6d 70 6f 6e 65  |.....Z...compone|
		00000080  6e 74 12 04 65 74 63 64  5a 15 0a 04 74 69 65 72  |nt..etcdZ...tier|
		00000090  12 0d 63 6f 6e 74 72 6f  6c 2d 70 6c 61 6e 65 62  |..control-planeb|
		000000a0  4d 0a 30 6b 75 62 65 61  64 6d 2e 6b 75 62 65 72  |M.0kubeadm.kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 65 74 63 64 2e 61 64  |netes.io/etcd.ad|
		000000c0  76 65 72 74 69 73 65 2d  63 6c 69 65 6e 74 2d 75  |vertise-client- [truncated 26848 chars]
	 >
	I0407 14:25:15.452889    9664 type.go:168] "Request Body" body=""
	I0407 14:25:15.452946    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:15.452946    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:15.453017    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:15.453058    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:15.455502    9664 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:25:15.455607    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:15.455607    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:15 GMT
	I0407 14:25:15.455607    9664 round_trippers.go:587]     Audit-Id: f5da5501-496f-44c7-a855-35c0827478b3
	I0407 14:25:15.455607    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:15.455683    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:15.455683    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:15.455683    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:15.455683    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 97 25 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 32 30  31 37 38 00 42 08 08 dd  |d4df2.20178.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22595 chars]
	 >
	I0407 14:25:15.455683    9664 pod_ready.go:93] pod "etcd-multinode-140200" in "kube-system" namespace has status "Ready":"True"
	I0407 14:25:15.455683    9664 pod_ready.go:82] duration metric: took 7.8593ms for pod "etcd-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:15.456308    9664 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:15.456308    9664 type.go:168] "Request Body" body=""
	I0407 14:25:15.456308    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-140200
	I0407 14:25:15.456308    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:15.456308    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:15.456308    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:15.458618    9664 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:25:15.459364    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:15.459364    9664 round_trippers.go:587]     Audit-Id: 867be509-de80-4e9b-8423-997f6b605908
	I0407 14:25:15.459364    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:15.459364    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:15.459498    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:15.459498    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:15.459498    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:15 GMT
	I0407 14:25:15.460242    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  9b 35 0a ab 1c 0a 1f 6b  75 62 65 2d 61 70 69 73  |.5.....kube-apis|
		00000020  65 72 76 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |erver-multinode-|
		00000030  31 34 30 32 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |140200....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 31 34 34 37 35 33 64  |ystem".*$144753d|
		00000050  63 2d 63 36 32 31 2d 34  35 66 37 2d 61 39 34 61  |c-c621-45f7-a94a|
		00000060  2d 38 62 33 38 33 35 65  65 62 62 31 32 32 04 31  |-8b3835eebb122.1|
		00000070  39 38 32 38 00 42 08 08  b7 c0 cf bf 06 10 00 5a  |9828.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 61 70 69 73 65  72 76 65 72 5a 15 0a 04  |be-apiserverZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 54 0a 3f 6b  75 62 65 61 64 6d 2e 6b  |anebT.?kubeadm.k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 6b 75 62  |ubernetes.io/ku [truncated 32773 chars]
	 >
	I0407 14:25:15.460697    9664 type.go:168] "Request Body" body=""
	I0407 14:25:15.460788    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:15.460788    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:15.460839    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:15.460839    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:15.463330    9664 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:25:15.463450    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:15.463450    9664 round_trippers.go:587]     Audit-Id: e636f6a7-8a11-4ce9-b274-22634c129ca6
	I0407 14:25:15.463450    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:15.463506    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:15.463506    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:15.463506    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:15.463506    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:15 GMT
	I0407 14:25:15.464218    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 97 25 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 32 30  31 37 38 00 42 08 08 dd  |d4df2.20178.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22595 chars]
	 >
	I0407 14:25:15.464436    9664 pod_ready.go:93] pod "kube-apiserver-multinode-140200" in "kube-system" namespace has status "Ready":"True"
	I0407 14:25:15.464436    9664 pod_ready.go:82] duration metric: took 8.1279ms for pod "kube-apiserver-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:15.464515    9664 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:15.464650    9664 type.go:168] "Request Body" body=""
	I0407 14:25:15.464755    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-140200
	I0407 14:25:15.464755    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:15.464814    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:15.464814    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:15.467183    9664 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:25:15.467183    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:15.467183    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:15.467183    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:15.467183    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:15.467183    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:15 GMT
	I0407 14:25:15.467183    9664 round_trippers.go:587]     Audit-Id: 8c4828ca-f0fb-4aa4-8da7-a97647cb95a1
	I0407 14:25:15.467183    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:15.467183    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  d5 31 0a 99 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.1....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 31 34 30 32 30 30 12  |ultinode-140200.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 61 37 63 36 65 33  62 62 2d 31 39 37 63 2d  |*$a7c6e3bb-197c-|
		00000060  34 33 34 65 2d 39 66 31  39 2d 37 34 64 37 65 34  |434e-9f19-74d7e4|
		00000070  38 62 35 30 64 65 32 04  31 39 39 33 38 00 42 08  |8b50de2.19938.B.|
		00000080  08 e0 b4 cf bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 30492 chars]
	 >
	I0407 14:25:15.467183    9664 type.go:168] "Request Body" body=""
	I0407 14:25:15.467183    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:15.467183    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:15.467183    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:15.467183    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:15.470037    9664 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:25:15.470037    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:15.470037    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:15 GMT
	I0407 14:25:15.470037    9664 round_trippers.go:587]     Audit-Id: f09c00b8-345b-4748-b7e4-65aa9b8577e4
	I0407 14:25:15.470037    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:15.470037    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:15.470037    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:15.470037    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:15.471264    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 97 25 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 32 30  31 37 38 00 42 08 08 dd  |d4df2.20178.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22595 chars]
	 >
	I0407 14:25:15.471264    9664 pod_ready.go:93] pod "kube-controller-manager-multinode-140200" in "kube-system" namespace has status "Ready":"True"
	I0407 14:25:15.471452    9664 pod_ready.go:82] duration metric: took 6.9371ms for pod "kube-controller-manager-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:15.471452    9664 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2r7lj" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:15.471452    9664 type.go:168] "Request Body" body=""
	I0407 14:25:15.626492    9664 request.go:661] Waited for 154.8699ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2r7lj
	I0407 14:25:15.626492    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2r7lj
	I0407 14:25:15.626492    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:15.626492    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:15.626492    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:15.632044    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:25:15.632108    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:15.632108    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:15 GMT
	I0407 14:25:15.632108    9664 round_trippers.go:587]     Audit-Id: 2feefe9b-59fc-42c1-b2b6-c13f94f57eba
	I0407 14:25:15.632108    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:15.632108    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:15.632108    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:15.632167    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:15.632497    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  a0 25 0a be 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.%.....kube-prox|
		00000020  79 2d 32 72 37 6c 6a 12  0b 6b 75 62 65 2d 70 72  |y-2r7lj..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 34 38 39  32 64 37 30 33 2d 66 63  |m".*$4892d703-fc|
		00000050  34 33 2d 34 66 36 37 2d  38 34 39 33 2d 65 61 65  |43-4f67-8493-eae|
		00000060  61 65 38 63 35 65 37 36  35 32 03 36 33 32 38 00  |ae8c5e7652.6328.|
		00000070  42 08 08 a0 b6 cf bf 06  10 00 5a 26 0a 18 63 6f  |B.........Z&..co|
		00000080  6e 74 72 6f 6c 6c 65 72  2d 72 65 76 69 73 69 6f  |ntroller-revisio|
		00000090  6e 2d 68 61 73 68 12 0a  37 62 62 38 34 63 34 39  |n-hash..7bb84c49|
		000000a0  38 34 5a 15 0a 07 6b 38  73 2d 61 70 70 12 0a 6b  |84Z...k8s-app..k|
		000000b0  75 62 65 2d 70 72 6f 78  79 5a 1c 0a 17 70 6f 64  |ube-proxyZ...pod|
		000000c0  2d 74 65 6d 70 6c 61 74  65 2d 67 65 6e 65 72 61  |-template-gener [truncated 22666 chars]
	 >
	I0407 14:25:15.632663    9664 type.go:168] "Request Body" body=""
	I0407 14:25:15.827616    9664 request.go:661] Waited for 194.951ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:25:15.827859    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:25:15.827859    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:15.827859    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:15.827859    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:15.831936    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:15.831936    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:15.831936    9664 round_trippers.go:587]     Audit-Id: 1bb33d8a-e4b4-44f8-8f48-d5e8e9fdaea4
	I0407 14:25:15.831936    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:15.831936    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:15.831936    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:15.831936    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:15.831936    9664 round_trippers.go:587]     Content-Length: 3463
	I0407 14:25:15.831936    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:15 GMT
	I0407 14:25:15.831936    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f0 1a 0a b0 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 04 31 37 38 32 38 00  |f2f300172.17828.|
		00000060  42 08 08 a0 b6 cf bf 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16110 chars]
	 >
	I0407 14:25:15.832581    9664 pod_ready.go:93] pod "kube-proxy-2r7lj" in "kube-system" namespace has status "Ready":"True"
	I0407 14:25:15.832581    9664 pod_ready.go:82] duration metric: took 361.1264ms for pod "kube-proxy-2r7lj" in "kube-system" namespace to be "Ready" ...
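The "Waited for ... due to client-side throttling, not priority and fairness" messages in the block above come from client-go's local rate limiter, not from the API server. A minimal sketch of where that limit lives, assuming a stock client-go rest.Config; the QPS/Burst values below are illustrative assumptions, not minikube's actual settings:

// Sketch: raising client-go's client-side rate limit so bursts of GETs
// (like the per-pod readiness checks above) are not queued locally.
// QPS/Burst values are assumptions for illustration only.
package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func newClient(kubeconfig string) (*kubernetes.Clientset, error) {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return nil, err
	}
	// client-go defaults are QPS=5, Burst=10; once the burst is spent the
	// client queues requests and logs "Waited for ... due to client-side throttling".
	cfg.QPS = 50
	cfg.Burst = 100
	return kubernetes.NewForConfig(cfg)
}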
	I0407 14:25:15.832659    9664 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9rx2d" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:15.832838    9664 type.go:168] "Request Body" body=""
	I0407 14:25:16.027190    9664 request.go:661] Waited for 194.2878ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9rx2d
	I0407 14:25:16.027190    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9rx2d
	I0407 14:25:16.027190    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:16.027190    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:16.027190    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:16.034868    9664 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 14:25:16.034868    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:16.034868    9664 round_trippers.go:587]     Audit-Id: 4133775b-1bd3-42ca-899e-4945d6d50d6d
	I0407 14:25:16.034868    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:16.034868    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:16.034933    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:16.034933    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:16.034933    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:16 GMT
	I0407 14:25:16.035216    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  87 26 0a bf 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 39 72 78 32 64 12  0b 6b 75 62 65 2d 70 72  |y-9rx2d..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 32 65 61  61 62 32 35 64 2d 66 65  |m".*$2eaab25d-fe|
		00000050  30 62 2d 34 63 34 38 2d  61 63 36 62 2d 34 32 30  |0b-4c48-ac6b-420|
		00000060  39 35 66 35 66 62 63 65  36 32 04 31 39 36 35 38  |95f5fbce62.19658|
		00000070  00 42 08 08 e5 b4 cf bf  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 37 62 62 38 34 63 34  |on-hash..7bb84c4|
		000000a0  39 38 34 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |984Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23147 chars]
	 >
	I0407 14:25:16.035505    9664 type.go:168] "Request Body" body=""
	I0407 14:25:16.226670    9664 request.go:661] Waited for 191.164ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:16.226670    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:16.226670    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:16.226670    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:16.226670    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:16.231184    9664 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:25:16.231184    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:16.231293    9664 round_trippers.go:587]     Audit-Id: 1eddef35-1f56-488b-8773-b8e54f7a9a94
	I0407 14:25:16.231293    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:16.231293    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:16.231293    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:16.231293    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:16.231293    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:16 GMT
	I0407 14:25:16.231484    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 97 25 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 32 30  31 37 38 00 42 08 08 dd  |d4df2.20178.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22595 chars]
	 >
	I0407 14:25:16.231484    9664 pod_ready.go:93] pod "kube-proxy-9rx2d" in "kube-system" namespace has status "Ready":"True"
	I0407 14:25:16.231484    9664 pod_ready.go:82] duration metric: took 398.8218ms for pod "kube-proxy-9rx2d" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:16.231484    9664 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kvg58" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:16.232022    9664 type.go:168] "Request Body" body=""
	I0407 14:25:16.428908    9664 request.go:661] Waited for 196.8841ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kvg58
	I0407 14:25:16.428908    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kvg58
	I0407 14:25:16.428908    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:16.429507    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:16.429507    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:16.434571    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:25:16.434571    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:16.434571    9664 round_trippers.go:587]     Audit-Id: 8dae401b-da2a-4e6b-aa61-a77b63eace82
	I0407 14:25:16.434571    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:16.434571    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:16.434571    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:16.434571    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:16.434571    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:16 GMT
	I0407 14:25:16.435200    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  a5 26 0a c2 15 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 6b 76 67 35 38 12  0b 6b 75 62 65 2d 70 72  |y-kvg58..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 62 61 38  61 33 33 32 63 2d 62 62  |m".*$ba8a332c-bb|
		00000050  34 61 2d 34 65 39 63 2d  39 61 34 65 2d 32 63 35  |4a-4e9c-9a4e-2c5|
		00000060  37 38 62 64 63 39 39 63  31 32 04 31 38 33 36 38  |78bdc99c12.18368|
		00000070  00 42 08 08 c8 b8 cf bf  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 37 62 62 38 34 63 34  |on-hash..7bb84c4|
		000000a0  39 38 34 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |984Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23303 chars]
	 >
	I0407 14:25:16.435576    9664 type.go:168] "Request Body" body=""
	I0407 14:25:16.627305    9664 request.go:661] Waited for 191.7268ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/nodes/multinode-140200-m03
	I0407 14:25:16.627305    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200-m03
	I0407 14:25:16.627305    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:16.627305    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:16.627305    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:16.632126    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:16.632126    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:16.632126    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:16.632126    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:16.632126    9664 round_trippers.go:587]     Content-Length: 3882
	I0407 14:25:16.632304    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:16 GMT
	I0407 14:25:16.632304    9664 round_trippers.go:587]     Audit-Id: 61102d7b-050e-4989-8035-0b26574af3a6
	I0407 14:25:16.632304    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:16.632304    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:16.632470    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 93 1e 0a eb 12 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 33 12 00 1a 00  |e-140200-m03....|
		00000030  22 00 2a 24 64 33 34 31  65 64 66 63 2d 36 33 31  |".*$d341edfc-631|
		00000040  35 2d 34 62 37 62 2d 38  33 30 34 2d 66 39 32 62  |5-4b7b-8304-f92b|
		00000050  63 34 32 31 32 65 39 33  32 04 31 39 35 32 38 00  |c4212e932.19528.|
		00000060  42 08 08 89 be cf bf 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 18167 chars]
	 >
	I0407 14:25:16.632470    9664 pod_ready.go:98] node "multinode-140200-m03" hosting pod "kube-proxy-kvg58" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200-m03" has status "Ready":"Unknown"
	I0407 14:25:16.632470    9664 pod_ready.go:82] duration metric: took 400.9828ms for pod "kube-proxy-kvg58" in "kube-system" namespace to be "Ready" ...
	E0407 14:25:16.632470    9664 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-140200-m03" hosting pod "kube-proxy-kvg58" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200-m03" has status "Ready":"Unknown"
	I0407 14:25:16.632470    9664 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:16.632470    9664 type.go:168] "Request Body" body=""
	I0407 14:25:16.826703    9664 request.go:661] Waited for 194.2321ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-140200
	I0407 14:25:16.827192    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-140200
	I0407 14:25:16.827192    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:16.827252    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:16.827252    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:16.831464    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:16.831542    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:16.831542    9664 round_trippers.go:587]     Audit-Id: c0fd04ad-4d96-4e18-a249-2eddce1771e1
	I0407 14:25:16.831542    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:16.831601    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:16.831601    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:16.831601    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:16.831601    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:16 GMT
	I0407 14:25:16.831988    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  e0 23 0a 81 18 0a 1f 6b  75 62 65 2d 73 63 68 65  |.#.....kube-sche|
		00000020  64 75 6c 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |duler-multinode-|
		00000030  31 34 30 32 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |140200....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 38 38 64 66 65 65 65  |ystem".*$88dfeee|
		00000050  38 2d 61 33 63 31 2d 34  38 35 62 2d 61 62 66 65  |8-a3c1-485b-abfe|
		00000060  2d 39 65 61 66 30 30 35  37 64 36 63 66 32 04 31  |-9eaf0057d6cf2.1|
		00000070  39 37 35 38 00 42 08 08  e0 b4 cf bf 06 10 00 5a  |9758.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 73 63 68 65 64  75 6c 65 72 5a 15 0a 04  |be-schedulerZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 3d 0a 19 6b  75 62 65 72 6e 65 74 65  |aneb=..kubernete|
		000000c0  73 2e 69 6f 2f 63 6f 6e  66 69 67 2e 68 61 73 68  |s.io/config.has [truncated 21718 chars]
	 >
	I0407 14:25:16.832343    9664 type.go:168] "Request Body" body=""
	I0407 14:25:17.027209    9664 request.go:661] Waited for 194.8639ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:17.027209    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:17.027209    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:17.027691    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:17.027789    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:17.031445    9664 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:25:17.031554    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:17.031610    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:17.031610    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:17 GMT
	I0407 14:25:17.031610    9664 round_trippers.go:587]     Audit-Id: 56520e97-af50-468f-b1c2-8d1718e05385
	I0407 14:25:17.031610    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:17.031610    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:17.031610    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:17.032159    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 97 25 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 32 30  31 37 38 00 42 08 08 dd  |d4df2.20178.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22595 chars]
	 >
	I0407 14:25:17.032367    9664 pod_ready.go:93] pod "kube-scheduler-multinode-140200" in "kube-system" namespace has status "Ready":"True"
	I0407 14:25:17.032434    9664 pod_ready.go:82] duration metric: took 399.8943ms for pod "kube-scheduler-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:17.032434    9664 pod_ready.go:39] duration metric: took 1.6005899s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
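Each pod_ready wait above is a GET of the pod followed by a GET of its node, with readiness decided from the pod's PodReady condition; pods on a node whose own Ready status is Unknown are skipped, as happened for kube-proxy-kvg58 on multinode-140200-m03. A minimal client-go sketch of that polling pattern, not minikube's pod_ready.go; the 6-minute timeout mirrors the "waiting up to 6m0s" lines, everything else is an assumption:

// Sketch: poll a pod until its Ready condition is True, in the spirit of
// the pod_ready waits logged above. Not minikube's implementation.
package main

import (
	"context"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
)

func podIsReady(p *corev1.Pod) bool {
	for _, c := range p.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitPodReady polls every 500ms until the pod reports Ready or the
// timeout (6m0s in the log above) expires.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	return wait.PollUntilContextTimeout(context.Background(), 500*time.Millisecond, timeout, true,
		func(ctx context.Context) (bool, error) {
			p, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient errors: keep polling
			}
			return podIsReady(p), nil
		})
}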
	I0407 14:25:17.032546    9664 api_server.go:52] waiting for apiserver process to appear ...
	I0407 14:25:17.045912    9664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:25:17.072888    9664 command_runner.go:130] > 1986
	I0407 14:25:17.072953    9664 api_server.go:72] duration metric: took 16.9589627s to wait for apiserver process to appear ...
	I0407 14:25:17.072953    9664 api_server.go:88] waiting for apiserver healthz status ...
	I0407 14:25:17.072953    9664 api_server.go:253] Checking apiserver healthz at https://172.17.81.10:8443/healthz ...
	I0407 14:25:17.082197    9664 api_server.go:279] https://172.17.81.10:8443/healthz returned 200:
	ok
	I0407 14:25:17.082621    9664 discovery_client.go:658] "Request Body" body=""
	I0407 14:25:17.082694    9664 round_trippers.go:470] GET https://172.17.81.10:8443/version
	I0407 14:25:17.082694    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:17.082694    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:17.082694    9664 round_trippers.go:480]     Accept: application/json, */*
	I0407 14:25:17.084383    9664 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0407 14:25:17.084432    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:17.084432    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:17.084432    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:17.084432    9664 round_trippers.go:587]     Content-Length: 263
	I0407 14:25:17.084432    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:17 GMT
	I0407 14:25:17.084432    9664 round_trippers.go:587]     Audit-Id: dc307c05-c46b-4d5e-86cf-6c06e60c28c3
	I0407 14:25:17.084486    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:17.084486    9664 round_trippers.go:587]     Content-Type: application/json
	I0407 14:25:17.084486    9664 discovery_client.go:658] "Response Body" body=<
		{
		  "major": "1",
		  "minor": "32",
		  "gitVersion": "v1.32.2",
		  "gitCommit": "67a30c0adcf52bd3f56ff0893ce19966be12991f",
		  "gitTreeState": "clean",
		  "buildDate": "2025-02-12T21:19:47Z",
		  "goVersion": "go1.23.6",
		  "compiler": "gc",
		  "platform": "linux/amd64"
		}
	 >
	I0407 14:25:17.084540    9664 api_server.go:141] control plane version: v1.32.2
	I0407 14:25:17.084600    9664 api_server.go:131] duration metric: took 11.6471ms to wait for apiserver health ...
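The healthz and version probes above are plain GETs against the API server: /healthz answering 200 "ok", then /version reporting gitVersion v1.32.2. A compact client-go equivalent, offered as an illustrative sketch only:

// Sketch: check /healthz and read the server version, mirroring the
// "apiserver healthz" and "control plane version" steps above.
package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
)

func checkAPIServer(ctx context.Context, cs *kubernetes.Clientset) error {
	// GET /healthz; a healthy apiserver answers 200 with body "ok".
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(ctx)
	if err != nil {
		return fmt.Errorf("healthz: %w", err)
	}
	fmt.Printf("healthz: %s\n", body)

	// GET /version; the log above reports gitVersion v1.32.2.
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		return fmt.Errorf("version: %w", err)
	}
	fmt.Println("control plane version:", v.GitVersion)
	return nil
}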
	I0407 14:25:17.084600    9664 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 14:25:17.084662    9664 type.go:204] "Request Body" body=""
	I0407 14:25:17.227083    9664 request.go:661] Waited for 142.3652ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods
	I0407 14:25:17.227686    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods
	I0407 14:25:17.227686    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:17.227686    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:17.227686    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:17.233351    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:25:17.233351    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:17.233351    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:17.233351    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:17.233351    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:17 GMT
	I0407 14:25:17.233351    9664 round_trippers.go:587]     Audit-Id: 5922b792-2adc-4725-9b72-24bf3560bfc8
	I0407 14:25:17.233351    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:17.233351    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:17.236180    9664 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 e7 e8 03 0a  0a 0a 00 12 04 32 30 31  |ist..........201|
		00000020  38 1a 00 12 c1 28 0a af  19 0a 18 63 6f 72 65 64  |8....(.....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 35 66  |ns-668d6bf9bc-5f|
		00000040  70 34 66 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |p4f..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 34 33 37 32 32 36 61  |ystem".*$437226a|
		00000070  65 2d 65 36 33 64 2d 34  32 34 35 2d 62 62 65 61  |e-e63d-4245-bbea|
		00000080  2d 61 64 35 63 34 31 66  66 39 61 39 33 32 04 32  |-ad5c41ff9a932.2|
		00000090  30 30 39 38 00 42 08 08  e6 b4 cf bf 06 10 00 5a  |0098.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 308089 chars]
	 >
	I0407 14:25:17.237245    9664 system_pods.go:59] 12 kube-system pods found
	I0407 14:25:17.237349    9664 system_pods.go:61] "coredns-668d6bf9bc-5fp4f" [437226ae-e63d-4245-bbea-ad5c41ff9a93] Running
	I0407 14:25:17.237349    9664 system_pods.go:61] "etcd-multinode-140200" [50e84c56-5d78-4a51-bd63-4a724ccd5fd8] Running
	I0407 14:25:17.237349    9664 system_pods.go:61] "kindnet-pv67r" [5f3d17bc-3df2-48f9-9840-641673243750] Running
	I0407 14:25:17.237349    9664 system_pods.go:61] "kindnet-rnp2q" [e28e853b-b703-4a36-90d2-3af1a37e74e0] Running
	I0407 14:25:17.237431    9664 system_pods.go:61] "kindnet-zkw9q" [123858da-6f70-4b10-b38e-bd930d21dbe4] Running
	I0407 14:25:17.237431    9664 system_pods.go:61] "kube-apiserver-multinode-140200" [144753dc-c621-45f7-a94a-8b3835eebb12] Running
	I0407 14:25:17.237431    9664 system_pods.go:61] "kube-controller-manager-multinode-140200" [a7c6e3bb-197c-434e-9f19-74d7e48b50de] Running
	I0407 14:25:17.237431    9664 system_pods.go:61] "kube-proxy-2r7lj" [4892d703-fc43-4f67-8493-eaeae8c5e765] Running
	I0407 14:25:17.237431    9664 system_pods.go:61] "kube-proxy-9rx2d" [2eaab25d-fe0b-4c48-ac6b-42095f5fbce6] Running
	I0407 14:25:17.237431    9664 system_pods.go:61] "kube-proxy-kvg58" [ba8a332c-bb4a-4e9c-9a4e-2c578bdc99c1] Running
	I0407 14:25:17.237431    9664 system_pods.go:61] "kube-scheduler-multinode-140200" [88dfeee8-a3c1-485b-abfe-9eaf0057d6cf] Running
	I0407 14:25:17.237431    9664 system_pods.go:61] "storage-provisioner" [01df03d8-8816-480c-941b-180069d26997] Running
	I0407 14:25:17.237431    9664 system_pods.go:74] duration metric: took 152.8296ms to wait for pod list to return data ...
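The "12 kube-system pods found" block above is a single namespaced pod list, with each pod's name, UID, and phase echoed. A hedged sketch of the same call (names here are generic, not minikube's):

// Sketch: list kube-system pods and report their phase, as in the
// "waiting for kube-system pods to appear" step above.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func listSystemPods(ctx context.Context, cs kubernetes.Interface) error {
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("%q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
	return nil
}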
	I0407 14:25:17.237431    9664 default_sa.go:34] waiting for default service account to be created ...
	I0407 14:25:17.237573    9664 type.go:204] "Request Body" body=""
	I0407 14:25:17.426716    9664 request.go:661] Waited for 189.1415ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/namespaces/default/serviceaccounts
	I0407 14:25:17.426993    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/default/serviceaccounts
	I0407 14:25:17.426993    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:17.426993    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:17.427189    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:17.434636    9664 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 14:25:17.434636    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:17.434636    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:17.434636    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:17.434753    9664 round_trippers.go:587]     Content-Length: 129
	I0407 14:25:17.434753    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:17 GMT
	I0407 14:25:17.434753    9664 round_trippers.go:587]     Audit-Id: 6630aae9-ebe9-423b-9ce6-95e466f9ac4d
	I0407 14:25:17.434753    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:17.434753    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:17.434830    9664 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 18 0a 02  76 31 12 12 53 65 72 76  |k8s.....v1..Serv|
		00000010  69 63 65 41 63 63 6f 75  6e 74 4c 69 73 74 12 5d  |iceAccountList.]|
		00000020  0a 0a 0a 00 12 04 32 30  31 38 1a 00 12 4f 0a 4d  |......2018...O.M|
		00000030  0a 07 64 65 66 61 75 6c  74 12 00 1a 07 64 65 66  |..default....def|
		00000040  61 75 6c 74 22 00 2a 24  66 66 31 39 65 66 62 31  |ault".*$ff19efb1|
		00000050  2d 63 35 63 63 2d 34 63  39 30 2d 62 63 36 61 2d  |-c5cc-4c90-bc6a-|
		00000060  31 36 33 38 65 32 62 61  39 39 37 38 32 03 33 33  |1638e2ba99782.33|
		00000070  34 38 00 42 08 08 e5 b4  cf bf 06 10 00 1a 00 22  |48.B..........."|
		00000080  00                                                |.|
	 >
	I0407 14:25:17.434860    9664 default_sa.go:45] found service account: "default"
	I0407 14:25:17.434860    9664 default_sa.go:55] duration metric: took 197.4278ms for default service account to be created ...
	I0407 14:25:17.434860    9664 system_pods.go:116] waiting for k8s-apps to be running ...
	I0407 14:25:17.434860    9664 type.go:204] "Request Body" body=""
	I0407 14:25:17.626982    9664 request.go:661] Waited for 192.1202ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods
	I0407 14:25:17.627467    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods
	I0407 14:25:17.627467    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:17.627467    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:17.627467    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:17.631976    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:17.631976    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:17.631976    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:17.631976    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:17.631976    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:17.631976    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:17.631976    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:17 GMT
	I0407 14:25:17.631976    9664 round_trippers.go:587]     Audit-Id: b54fd5ea-e2b5-4b1c-a1a4-8582f7d41d77
	I0407 14:25:17.634714    9664 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 e7 e8 03 0a  0a 0a 00 12 04 32 30 31  |ist..........201|
		00000020  38 1a 00 12 c1 28 0a af  19 0a 18 63 6f 72 65 64  |8....(.....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 35 66  |ns-668d6bf9bc-5f|
		00000040  70 34 66 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |p4f..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 34 33 37 32 32 36 61  |ystem".*$437226a|
		00000070  65 2d 65 36 33 64 2d 34  32 34 35 2d 62 62 65 61  |e-e63d-4245-bbea|
		00000080  2d 61 64 35 63 34 31 66  66 39 61 39 33 32 04 32  |-ad5c41ff9a932.2|
		00000090  30 30 39 38 00 42 08 08  e6 b4 cf bf 06 10 00 5a  |0098.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 308089 chars]
	 >
	I0407 14:25:17.635499    9664 system_pods.go:86] 12 kube-system pods found
	I0407 14:25:17.635568    9664 system_pods.go:89] "coredns-668d6bf9bc-5fp4f" [437226ae-e63d-4245-bbea-ad5c41ff9a93] Running
	I0407 14:25:17.635568    9664 system_pods.go:89] "etcd-multinode-140200" [50e84c56-5d78-4a51-bd63-4a724ccd5fd8] Running
	I0407 14:25:17.635568    9664 system_pods.go:89] "kindnet-pv67r" [5f3d17bc-3df2-48f9-9840-641673243750] Running
	I0407 14:25:17.635568    9664 system_pods.go:89] "kindnet-rnp2q" [e28e853b-b703-4a36-90d2-3af1a37e74e0] Running
	I0407 14:25:17.635568    9664 system_pods.go:89] "kindnet-zkw9q" [123858da-6f70-4b10-b38e-bd930d21dbe4] Running
	I0407 14:25:17.635568    9664 system_pods.go:89] "kube-apiserver-multinode-140200" [144753dc-c621-45f7-a94a-8b3835eebb12] Running
	I0407 14:25:17.635568    9664 system_pods.go:89] "kube-controller-manager-multinode-140200" [a7c6e3bb-197c-434e-9f19-74d7e48b50de] Running
	I0407 14:25:17.635568    9664 system_pods.go:89] "kube-proxy-2r7lj" [4892d703-fc43-4f67-8493-eaeae8c5e765] Running
	I0407 14:25:17.635568    9664 system_pods.go:89] "kube-proxy-9rx2d" [2eaab25d-fe0b-4c48-ac6b-42095f5fbce6] Running
	I0407 14:25:17.635568    9664 system_pods.go:89] "kube-proxy-kvg58" [ba8a332c-bb4a-4e9c-9a4e-2c578bdc99c1] Running
	I0407 14:25:17.635568    9664 system_pods.go:89] "kube-scheduler-multinode-140200" [88dfeee8-a3c1-485b-abfe-9eaf0057d6cf] Running
	I0407 14:25:17.635735    9664 system_pods.go:89] "storage-provisioner" [01df03d8-8816-480c-941b-180069d26997] Running
	I0407 14:25:17.635735    9664 system_pods.go:126] duration metric: took 200.8738ms to wait for k8s-apps to be running ...
	I0407 14:25:17.635735    9664 system_svc.go:44] waiting for kubelet service to be running ....
	I0407 14:25:17.646094    9664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 14:25:17.671388    9664 system_svc.go:56] duration metric: took 35.6528ms WaitForService to wait for kubelet
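The kubelet check above reduces to the exit status of one command run inside the VM over SSH. A hypothetical local equivalent; the SSH transport is omitted and only the command line matches the log:

// Sketch: "is the kubelet service running?" as the exit status of
// systemctl, matching the command logged above. SSH plumbing omitted.
package main

import (
	"fmt"
	"os/exec"
)

func kubeletRunning() bool {
	// Matches: sudo systemctl is-active --quiet service kubelet
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	return cmd.Run() == nil // exit status 0 means the unit is active
}

func main() {
	fmt.Println("kubelet running:", kubeletRunning())
}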
	I0407 14:25:17.671388    9664 kubeadm.go:582] duration metric: took 17.5573938s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 14:25:17.671555    9664 node_conditions.go:102] verifying NodePressure condition ...
	I0407 14:25:17.671673    9664 type.go:204] "Request Body" body=""
	I0407 14:25:17.826762    9664 request.go:661] Waited for 155.0305ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/nodes
	I0407 14:25:17.826762    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes
	I0407 14:25:17.826762    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:17.826762    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:17.826762    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:17.831193    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:17.831193    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:17.831273    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:17.831273    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:17 GMT
	I0407 14:25:17.831273    9664 round_trippers.go:587]     Audit-Id: 62a35013-42ff-4a6d-a636-cd50e63978db
	I0407 14:25:17.831273    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:17.831273    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:17.831273    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:17.831920    9664 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0e 0a 02  76 31 12 08 4e 6f 64 65  |k8s.....v1..Node|
		00000010  4c 69 73 74 12 ea 5d 0a  0a 0a 00 12 04 32 30 32  |List..]......202|
		00000020  30 1a 00 12 d2 24 0a f8  11 0a 10 6d 75 6c 74 69  |0....$.....multi|
		00000030  6e 6f 64 65 2d 31 34 30  32 30 30 12 00 1a 00 22  |node-140200...."|
		00000040  00 2a 24 31 66 35 33 62  34 63 64 2d 61 62 30 31  |.*$1f53b4cd-ab01|
		00000050  2d 34 32 63 61 2d 61 36  61 36 2d 61 39 33 65 66  |-42ca-a6a6-a93ef|
		00000060  63 39 62 64 34 64 66 32  04 32 30 31 39 38 00 42  |c9bd4df2.20198.B|
		00000070  08 08 dd b4 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000080  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000090  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		000000a0  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000b0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/ar [truncated 58452 chars]
	 >
	I0407 14:25:17.832282    9664 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 14:25:17.832381    9664 node_conditions.go:123] node cpu capacity is 2
	I0407 14:25:17.832430    9664 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 14:25:17.832430    9664 node_conditions.go:123] node cpu capacity is 2
	I0407 14:25:17.832430    9664 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 14:25:17.832430    9664 node_conditions.go:123] node cpu capacity is 2
	I0407 14:25:17.832430    9664 node_conditions.go:105] duration metric: took 160.8737ms to run NodePressure ...
	I0407 14:25:17.832430    9664 start.go:241] waiting for startup goroutines ...
	I0407 14:25:17.832430    9664 start.go:246] waiting for cluster config update ...
	I0407 14:25:17.832430    9664 start.go:255] writing updated cluster config ...
	I0407 14:25:17.839229    9664 out.go:201] 
	I0407 14:25:17.842142    9664 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 14:25:17.851140    9664 config.go:182] Loaded profile config "multinode-140200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 14:25:17.851770    9664 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\config.json ...
	I0407 14:25:17.856976    9664 out.go:177] * Starting "multinode-140200-m02" worker node in "multinode-140200" cluster
	I0407 14:25:17.862270    9664 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 14:25:17.862326    9664 cache.go:56] Caching tarball of preloaded images
	I0407 14:25:17.862326    9664 preload.go:172] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0407 14:25:17.862856    9664 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0407 14:25:17.863010    9664 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\config.json ...
	I0407 14:25:17.865055    9664 start.go:360] acquireMachinesLock for multinode-140200-m02: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 14:25:17.865055    9664 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-140200-m02"
	I0407 14:25:17.865055    9664 start.go:96] Skipping create...Using existing machine configuration
	I0407 14:25:17.865055    9664 fix.go:54] fixHost starting: m02
	I0407 14:25:17.865645    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:25:20.028277    9664 main.go:141] libmachine: [stdout =====>] : Off
	
	I0407 14:25:20.028662    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:20.028662    9664 fix.go:112] recreateIfNeeded on multinode-140200-m02: state=Stopped err=<nil>
	W0407 14:25:20.028662    9664 fix.go:138] unexpected machine state, will restart: <nil>
	I0407 14:25:20.034137    9664 out.go:177] * Restarting existing hyperv VM for "multinode-140200-m02" ...
	I0407 14:25:20.037084    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-140200-m02
	I0407 14:25:23.246002    9664 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:25:23.246002    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:23.246220    9664 main.go:141] libmachine: Waiting for host to start...
	I0407 14:25:23.246292    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:25:25.584834    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:25:25.585187    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:25.585187    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:25:28.291683    9664 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:25:28.291683    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:29.291954    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:25:31.624070    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:25:31.624667    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:31.624667    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:25:34.308393    9664 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:25:34.308393    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:35.308680    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:25:37.621521    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:25:37.621595    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:37.621712    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:25:40.232478    9664 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:25:40.232478    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:41.232716    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:25:43.542524    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:25:43.542524    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:43.542524    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:25:46.269896    9664 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:25:46.269896    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:47.270760    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:25:49.632029    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:25:49.632029    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:49.632029    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:25:52.304497    9664 main.go:141] libmachine: [stdout =====>] : 172.17.88.68
	
	I0407 14:25:52.304497    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:52.307323    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:25:54.500423    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:25:54.500797    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:54.500896    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:25:57.114452    9664 main.go:141] libmachine: [stdout =====>] : 172.17.88.68
	
	I0407 14:25:57.114452    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:57.115120    9664 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\config.json ...
	I0407 14:25:57.117770    9664 machine.go:93] provisionDockerMachine start ...
	I0407 14:25:57.117862    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:25:59.317057    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:25:59.317904    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:59.318043    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:26:01.927243    9664 main.go:141] libmachine: [stdout =====>] : 172.17.88.68
	
	I0407 14:26:01.927243    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:01.932898    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:26:01.933112    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.88.68 22 <nil> <nil>}
	I0407 14:26:01.933112    9664 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 14:26:02.068421    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0407 14:26:02.068562    9664 buildroot.go:166] provisioning hostname "multinode-140200-m02"
	I0407 14:26:02.068621    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:26:04.282526    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:26:04.283554    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:04.283554    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:26:06.896641    9664 main.go:141] libmachine: [stdout =====>] : 172.17.88.68
	
	I0407 14:26:06.896641    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:06.903250    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:26:06.904263    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.88.68 22 <nil> <nil>}
	I0407 14:26:06.904263    9664 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-140200-m02 && echo "multinode-140200-m02" | sudo tee /etc/hostname
	I0407 14:26:07.069029    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-140200-m02
	
	I0407 14:26:07.069135    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:26:09.269214    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:26:09.269495    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:09.269627    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:26:11.913528    9664 main.go:141] libmachine: [stdout =====>] : 172.17.88.68
	
	I0407 14:26:11.914223    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:11.920702    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:26:11.921204    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.88.68 22 <nil> <nil>}
	I0407 14:26:11.921278    9664 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-140200-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-140200-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-140200-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 14:26:12.080192    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 14:26:12.080192    9664 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0407 14:26:12.080192    9664 buildroot.go:174] setting up certificates
	I0407 14:26:12.080192    9664 provision.go:84] configureAuth start
	I0407 14:26:12.080192    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:26:14.284353    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:26:14.285375    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:14.285476    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:26:16.969626    9664 main.go:141] libmachine: [stdout =====>] : 172.17.88.68
	
	I0407 14:26:16.969626    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:16.970490    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:26:19.153196    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:26:19.153382    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:19.153505    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:26:21.757405    9664 main.go:141] libmachine: [stdout =====>] : 172.17.88.68
	
	I0407 14:26:21.757549    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:21.757549    9664 provision.go:143] copyHostCerts
	I0407 14:26:21.757810    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0407 14:26:21.758141    9664 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0407 14:26:21.758141    9664 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0407 14:26:21.758310    9664 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0407 14:26:21.760019    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0407 14:26:21.760328    9664 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0407 14:26:21.760328    9664 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0407 14:26:21.760328    9664 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0407 14:26:21.761670    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0407 14:26:21.761770    9664 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0407 14:26:21.761770    9664 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0407 14:26:21.762318    9664 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0407 14:26:21.763423    9664 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-140200-m02 san=[127.0.0.1 172.17.88.68 localhost minikube multinode-140200-m02]
	I0407 14:26:21.947726    9664 provision.go:177] copyRemoteCerts
	I0407 14:26:21.958973    9664 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 14:26:21.958973    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:26:24.170550    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:26:24.170550    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:24.170640    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:26:26.787352    9664 main.go:141] libmachine: [stdout =====>] : 172.17.88.68
	
	I0407 14:26:26.787352    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:26.787997    9664 sshutil.go:53] new ssh client: &{IP:172.17.88.68 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200-m02\id_rsa Username:docker}
	I0407 14:26:26.903318    9664 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9442152s)
	I0407 14:26:26.903368    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0407 14:26:26.903961    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 14:26:26.952672    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0407 14:26:26.952953    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0407 14:26:26.997094    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0407 14:26:26.997523    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0407 14:26:27.048237    9664 provision.go:87] duration metric: took 14.9679314s to configureAuth
	I0407 14:26:27.048237    9664 buildroot.go:189] setting minikube options for container-runtime
	I0407 14:26:27.048903    9664 config.go:182] Loaded profile config "multinode-140200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 14:26:27.048903    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:26:29.234349    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:26:29.235116    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:29.235188    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:26:31.857301    9664 main.go:141] libmachine: [stdout =====>] : 172.17.88.68
	
	I0407 14:26:31.857380    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:31.863793    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:26:31.864368    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.88.68 22 <nil> <nil>}
	I0407 14:26:31.864368    9664 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0407 14:26:32.005233    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0407 14:26:32.005233    9664 buildroot.go:70] root file system type: tmpfs
	I0407 14:26:32.005421    9664 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0407 14:26:32.005421    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:26:34.270407    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:26:34.270407    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:34.271415    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:26:36.873161    9664 main.go:141] libmachine: [stdout =====>] : 172.17.88.68
	
	I0407 14:26:36.874366    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:36.879873    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:26:36.880594    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.88.68 22 <nil> <nil>}
	I0407 14:26:36.880594    9664 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.81.10"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0407 14:26:37.054659    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.81.10
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0407 14:26:37.054729    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:26:39.241067    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:26:39.241067    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:39.241424    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:26:41.919824    9664 main.go:141] libmachine: [stdout =====>] : 172.17.88.68
	
	I0407 14:26:41.919824    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:41.925150    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:26:41.925798    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.88.68 22 <nil> <nil>}
	I0407 14:26:41.925798    9664 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0407 14:26:44.319046    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0407 14:26:44.319114    9664 machine.go:96] duration metric: took 47.200985s to provisionDockerMachine
	I0407 14:26:44.319114    9664 start.go:293] postStartSetup for "multinode-140200-m02" (driver="hyperv")
	I0407 14:26:44.319172    9664 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 14:26:44.330387    9664 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 14:26:44.330387    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:26:46.531384    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:26:46.531384    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:46.531585    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:26:49.174136    9664 main.go:141] libmachine: [stdout =====>] : 172.17.88.68
	
	I0407 14:26:49.174800    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:49.174977    9664 sshutil.go:53] new ssh client: &{IP:172.17.88.68 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200-m02\id_rsa Username:docker}
	I0407 14:26:49.293081    9664 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9626567s)
	I0407 14:26:49.305350    9664 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 14:26:49.312168    9664 command_runner.go:130] > NAME=Buildroot
	I0407 14:26:49.312409    9664 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0407 14:26:49.312409    9664 command_runner.go:130] > ID=buildroot
	I0407 14:26:49.312409    9664 command_runner.go:130] > VERSION_ID=2023.02.9
	I0407 14:26:49.312409    9664 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0407 14:26:49.312502    9664 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 14:26:49.312521    9664 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0407 14:26:49.312909    9664 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0407 14:26:49.313859    9664 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> 77282.pem in /etc/ssl/certs
	I0407 14:26:49.313859    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> /etc/ssl/certs/77282.pem
	I0407 14:26:49.324853    9664 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 14:26:49.343448    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem --> /etc/ssl/certs/77282.pem (1708 bytes)
	I0407 14:26:49.395188    9664 start.go:296] duration metric: took 5.0760354s for postStartSetup
	I0407 14:26:49.395188    9664 fix.go:56] duration metric: took 1m31.5294377s for fixHost
	I0407 14:26:49.395188    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:26:51.683947    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:26:51.684128    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:51.684128    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:26:54.347385    9664 main.go:141] libmachine: [stdout =====>] : 172.17.88.68
	
	I0407 14:26:54.347385    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:54.353841    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:26:54.353841    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.88.68 22 <nil> <nil>}
	I0407 14:26:54.354401    9664 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 14:26:54.498064    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744036014.518678794
	
	I0407 14:26:54.498064    9664 fix.go:216] guest clock: 1744036014.518678794
	I0407 14:26:54.498064    9664 fix.go:229] Guest: 2025-04-07 14:26:54.518678794 +0000 UTC Remote: 2025-04-07 14:26:49.3951885 +0000 UTC m=+255.293815901 (delta=5.123490294s)
	I0407 14:26:54.498064    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state

                                                
                                                
** /stderr **
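Note on the docker.service unit written at 14:26:37 above: the captured unit's own comments describe systemd's override behavior, where an empty ExecStart= line clears any start command inherited from a base unit before the replacement command is declared, and the log then swaps the new file in only when `diff` reports a change. The sketch below is illustrative only and is not part of the captured log; it shows the same ExecStart= reset pattern as a conventional drop-in override, with an assumed override path rather than the /lib/systemd/system/docker.service.new path minikube uses.

	# Minimal sketch of the ExecStart= reset pattern (illustrative paths, not from this log).
	sudo mkdir -p /etc/systemd/system/docker.service.d
	sudo tee /etc/systemd/system/docker.service.d/override.conf <<'EOF'
	[Service]
	# An empty ExecStart= clears the inherited command; without it systemd
	# refuses to start the unit ("more than one ExecStart= setting ...").
	ExecStart=
	ExecStart=/usr/bin/dockerd -H unix:///var/run/docker.sock
	EOF
	# Apply the override, mirroring the daemon-reload/restart step seen in the log.
	sudo systemctl daemon-reload && sudo systemctl restart docker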
multinode_test.go:328: failed to run minikube start. args "out/minikube-windows-amd64.exe node list -p multinode-140200" : exit status 1
multinode_test.go:331: (dbg) Run:  out/minikube-windows-amd64.exe node list -p multinode-140200
multinode_test.go:331: (dbg) Non-zero exit: out/minikube-windows-amd64.exe node list -p multinode-140200: context deadline exceeded (0s)
multinode_test.go:333: failed to run node list. args "out/minikube-windows-amd64.exe node list -p multinode-140200" : context deadline exceeded
multinode_test.go:338: reported node list is not the same after restart. Before restart: multinode-140200	172.17.92.89
multinode-140200-m02	172.17.82.40
multinode-140200-m03	172.17.83.62

                                                
                                                
After restart: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-140200 -n multinode-140200
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p multinode-140200 -n multinode-140200: (13.4445959s)
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 logs -n 25: (9.4693059s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| Command |                                                           Args                                                           |     Profile      |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	| cp      | multinode-140200 cp testdata\cp-test.txt                                                                                 | multinode-140200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:12 UTC | 07 Apr 25 14:12 UTC |
	|         | multinode-140200-m02:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-140200 ssh -n                                                                                                  | multinode-140200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:12 UTC | 07 Apr 25 14:12 UTC |
	|         | multinode-140200-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-140200 cp multinode-140200-m02:/home/docker/cp-test.txt                                                        | multinode-140200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:12 UTC | 07 Apr 25 14:12 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile1343764680\001\cp-test_multinode-140200-m02.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-140200 ssh -n                                                                                                  | multinode-140200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:12 UTC | 07 Apr 25 14:12 UTC |
	|         | multinode-140200-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-140200 cp multinode-140200-m02:/home/docker/cp-test.txt                                                        | multinode-140200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:12 UTC | 07 Apr 25 14:13 UTC |
	|         | multinode-140200:/home/docker/cp-test_multinode-140200-m02_multinode-140200.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-140200 ssh -n                                                                                                  | multinode-140200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:13 UTC | 07 Apr 25 14:13 UTC |
	|         | multinode-140200-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-140200 ssh -n multinode-140200 sudo cat                                                                        | multinode-140200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:13 UTC | 07 Apr 25 14:13 UTC |
	|         | /home/docker/cp-test_multinode-140200-m02_multinode-140200.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-140200 cp multinode-140200-m02:/home/docker/cp-test.txt                                                        | multinode-140200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:13 UTC | 07 Apr 25 14:13 UTC |
	|         | multinode-140200-m03:/home/docker/cp-test_multinode-140200-m02_multinode-140200-m03.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-140200 ssh -n                                                                                                  | multinode-140200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:13 UTC | 07 Apr 25 14:13 UTC |
	|         | multinode-140200-m02 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-140200 ssh -n multinode-140200-m03 sudo cat                                                                    | multinode-140200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:13 UTC | 07 Apr 25 14:14 UTC |
	|         | /home/docker/cp-test_multinode-140200-m02_multinode-140200-m03.txt                                                       |                  |                   |         |                     |                     |
	| cp      | multinode-140200 cp testdata\cp-test.txt                                                                                 | multinode-140200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	|         | multinode-140200-m03:/home/docker/cp-test.txt                                                                            |                  |                   |         |                     |                     |
	| ssh     | multinode-140200 ssh -n                                                                                                  | multinode-140200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	|         | multinode-140200-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-140200 cp multinode-140200-m03:/home/docker/cp-test.txt                                                        | multinode-140200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	|         | C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile1343764680\001\cp-test_multinode-140200-m03.txt |                  |                   |         |                     |                     |
	| ssh     | multinode-140200 ssh -n                                                                                                  | multinode-140200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:14 UTC |
	|         | multinode-140200-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| cp      | multinode-140200 cp multinode-140200-m03:/home/docker/cp-test.txt                                                        | multinode-140200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:14 UTC | 07 Apr 25 14:15 UTC |
	|         | multinode-140200:/home/docker/cp-test_multinode-140200-m03_multinode-140200.txt                                          |                  |                   |         |                     |                     |
	| ssh     | multinode-140200 ssh -n                                                                                                  | multinode-140200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | multinode-140200-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-140200 ssh -n multinode-140200 sudo cat                                                                        | multinode-140200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | /home/docker/cp-test_multinode-140200-m03_multinode-140200.txt                                                           |                  |                   |         |                     |                     |
	| cp      | multinode-140200 cp multinode-140200-m03:/home/docker/cp-test.txt                                                        | multinode-140200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | multinode-140200-m02:/home/docker/cp-test_multinode-140200-m03_multinode-140200-m02.txt                                  |                  |                   |         |                     |                     |
	| ssh     | multinode-140200 ssh -n                                                                                                  | multinode-140200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:15 UTC |
	|         | multinode-140200-m03 sudo cat                                                                                            |                  |                   |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                 |                  |                   |         |                     |                     |
	| ssh     | multinode-140200 ssh -n multinode-140200-m02 sudo cat                                                                    | multinode-140200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:15 UTC | 07 Apr 25 14:16 UTC |
	|         | /home/docker/cp-test_multinode-140200-m03_multinode-140200-m02.txt                                                       |                  |                   |         |                     |                     |
	| node    | multinode-140200 node stop m03                                                                                           | multinode-140200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:16 UTC | 07 Apr 25 14:16 UTC |
	| node    | multinode-140200 node start                                                                                              | multinode-140200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:17 UTC | 07 Apr 25 14:20 UTC |
	|         | m03 -v=7 --alsologtostderr                                                                                               |                  |                   |         |                     |                     |
	| node    | list -p multinode-140200                                                                                                 | multinode-140200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:20 UTC |                     |
	| stop    | -p multinode-140200                                                                                                      | multinode-140200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:20 UTC | 07 Apr 25 14:22 UTC |
	| start   | -p multinode-140200                                                                                                      | multinode-140200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:22 UTC |                     |
	|         | --wait=true -v=8                                                                                                         |                  |                   |         |                     |                     |
	|         | --alsologtostderr                                                                                                        |                  |                   |         |                     |                     |
	|---------|--------------------------------------------------------------------------------------------------------------------------|------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 14:22:34
	Running on machine: minikube3
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 14:22:34.222515    9664 out.go:345] Setting OutFile to fd 1152 ...
	I0407 14:22:34.301515    9664 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 14:22:34.301515    9664 out.go:358] Setting ErrFile to fd 1684...
	I0407 14:22:34.301515    9664 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 14:22:34.325617    9664 out.go:352] Setting JSON to false
	I0407 14:22:34.334322    9664 start.go:129] hostinfo: {"hostname":"minikube3","uptime":7546,"bootTime":1744028207,"procs":181,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0407 14:22:34.334322    9664 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0407 14:22:34.373317    9664 out.go:177] * [multinode-140200] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0407 14:22:34.407137    9664 notify.go:220] Checking for updates...
	I0407 14:22:34.421982    9664 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 14:22:34.439946    9664 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 14:22:34.470853    9664 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0407 14:22:34.501783    9664 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 14:22:34.526492    9664 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 14:22:34.536453    9664 config.go:182] Loaded profile config "multinode-140200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 14:22:34.536453    9664 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 14:22:40.846192    9664 out.go:177] * Using the hyperv driver based on existing profile
	I0407 14:22:40.851486    9664 start.go:297] selected driver: hyperv
	I0407 14:22:40.851486    9664 start.go:901] validating driver "hyperv" against &{Name:multinode-140200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 Cluste
rName:multinode-140200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.92.89 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.82.40 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.83.62 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false i
nspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirr
or: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:22:40.851592    9664 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 14:22:40.912713    9664 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 14:22:40.912713    9664 cni.go:84] Creating CNI manager for ""
	I0407 14:22:40.912713    9664 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0407 14:22:40.913758    9664 start.go:340] cluster config:
	{Name:multinode-140200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-140200 Namespace:default APIServerHAVIP: APIServerName:minikubeC
A APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.92.89 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.82.40 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.83.62 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubef
low:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath
: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:22:40.913758    9664 iso.go:125] acquiring lock: {Name:mk99bbb6a54210c1995fdf151b41c83b57c3735b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
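The config dump and cni.go lines above show the CNI auto-selection step: the profile has no explicit CNI set, and with three nodes found the manager recommends kindnet. A minimal sketch of that decision rule, using hypothetical helper names rather than minikube's real cni package API:

    package main

    import "fmt"

    // chooseCNI mirrors the decision visible in the log: when no CNI is set
    // explicitly and the cluster has more than one node, prefer kindnet so pod
    // traffic can cross node boundaries. Names here are illustrative only.
    func chooseCNI(requested string, nodeCount int) string {
        if requested != "" {
            return requested // honor an explicit --cni setting
        }
        if nodeCount > 1 {
            return "kindnet" // multinode detected, recommend kindnet
        }
        return "bridge" // single-node default in this sketch
    }

    func main() {
        fmt.Println(chooseCNI("", 3)) // "kindnet", as for the 3-node profile above
    }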
	I0407 14:22:41.006896    9664 out.go:177] * Starting "multinode-140200" primary control-plane node in "multinode-140200" cluster
	I0407 14:22:41.015176    9664 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 14:22:41.015717    9664 preload.go:146] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0407 14:22:41.016012    9664 cache.go:56] Caching tarball of preloaded images
	I0407 14:22:41.016076    9664 preload.go:172] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0407 14:22:41.016609    9664 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0407 14:22:41.016990    9664 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\config.json ...
	I0407 14:22:41.019639    9664 start.go:360] acquireMachinesLock for multinode-140200: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 14:22:41.019639    9664 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-140200"
	I0407 14:22:41.020168    9664 start.go:96] Skipping create...Using existing machine configuration
	I0407 14:22:41.020285    9664 fix.go:54] fixHost starting: 
	I0407 14:22:41.021118    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:22:44.157467    9664 main.go:141] libmachine: [stdout =====>] : Off
	
	I0407 14:22:44.157467    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:22:44.157467    9664 fix.go:112] recreateIfNeeded on multinode-140200: state=Stopped err=<nil>
	W0407 14:22:44.157467    9664 fix.go:138] unexpected machine state, will restart: <nil>
	I0407 14:22:44.161662    9664 out.go:177] * Restarting existing hyperv VM for "multinode-140200" ...
	I0407 14:22:44.164309    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-140200
	I0407 14:22:47.584814    9664 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:22:47.584883    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:22:47.584883    9664 main.go:141] libmachine: Waiting for host to start...
	I0407 14:22:47.584883    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:22:50.090728    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:22:50.090728    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:22:50.090968    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:22:52.947643    9664 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:22:52.947643    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:22:53.948723    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:22:56.402945    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:22:56.403684    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:22:56.403684    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:22:59.228555    9664 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:22:59.228555    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:00.229043    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:23:02.679251    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:23:02.679251    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:02.679251    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:23:05.532691    9664 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:23:05.533502    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:06.533929    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:23:09.017316    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:23:09.017316    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:09.018369    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:23:11.832237    9664 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:23:11.832454    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:12.833439    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:23:15.354793    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:23:15.354828    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:15.354907    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:23:18.312925    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:23:18.313136    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:18.316683    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:23:20.747224    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:23:20.747224    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:20.747608    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:23:23.705630    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:23:23.705681    9664 main.go:141] libmachine: [stderr =====>] : 
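Every "[executing ==>]" line above is a PowerShell invocation made from the Go driver; after restarting the VM it simply polls the VM state and the first NIC's first IP address until Hyper-V reports one (172.17.81.10 here, after roughly thirty seconds). A hedged sketch of that polling loop, shelling out to powershell.exe the same way but with simplified error handling and a fixed one-second sleep:

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
        "time"
    )

    // ps runs one PowerShell expression and returns its trimmed stdout,
    // mirroring the "[executing ==>]" / "[stdout =====>]" pairs in the log.
    func ps(expr string) (string, error) {
        out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
        return strings.TrimSpace(string(out)), err
    }

    // waitForIP polls VM state and the first IPv4 of the first adapter until
    // Hyper-V reports one, as the log does between 14:22:47 and 14:23:18.
    func waitForIP(vm string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            state, err := ps(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
            if err != nil {
                return "", err
            }
            if state == "Running" {
                ip, err := ps(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
                if err == nil && ip != "" {
                    return ip, nil
                }
            }
            time.Sleep(time.Second)
        }
        return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
    }

    func main() {
        ip, err := waitForIP("multinode-140200", 5*time.Minute)
        fmt.Println(ip, err)
    }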
	I0407 14:23:23.705681    9664 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\config.json ...
	I0407 14:23:23.708663    9664 machine.go:93] provisionDockerMachine start ...
	I0407 14:23:23.708663    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:23:26.171802    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:23:26.172588    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:26.172755    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:23:29.098404    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:23:29.098492    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:29.103912    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:23:29.104615    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.81.10 22 <nil> <nil>}
	I0407 14:23:29.105212    9664 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 14:23:29.254993    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0407 14:23:29.254993    9664 buildroot.go:166] provisioning hostname "multinode-140200"
	I0407 14:23:29.254993    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:23:31.728559    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:23:31.728559    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:31.729015    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:23:34.677540    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:23:34.677540    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:34.684323    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:23:34.685086    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.81.10 22 <nil> <nil>}
	I0407 14:23:34.685086    9664 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-140200 && echo "multinode-140200" | sudo tee /etc/hostname
	I0407 14:23:34.852285    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-140200
	
	I0407 14:23:34.852285    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:23:37.198114    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:23:37.199067    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:37.199180    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:23:39.930183    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:23:39.930183    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:39.938416    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:23:39.938996    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.81.10 22 <nil> <nil>}
	I0407 14:23:39.938996    9664 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-140200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-140200/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-140200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 14:23:40.093560    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: 
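The SSH command above is an idempotent /etc/hosts edit: it only touches the file when no line already ends in the new hostname, rewriting an existing 127.0.1.1 entry or appending one. The same transform expressed as a pure Go function, for illustration only (not the provisioner's actual code):

    package main

    import (
        "fmt"
        "regexp"
        "strings"
    )

    // ensureHostsEntry reproduces the shell logic shown in the log: if no line
    // in the hosts content already maps to the hostname, either rewrite an
    // existing 127.0.1.1 line or append a new one.
    func ensureHostsEntry(hosts, hostname string) string {
        if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(hosts) {
            return hosts // already present, nothing to do
        }
        loop := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
        if loop.MatchString(hosts) {
            return loop.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
        }
        return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
    }

    func main() {
        fmt.Print(ensureHostsEntry("127.0.0.1 localhost\n", "multinode-140200"))
    }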
	I0407 14:23:40.093560    9664 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0407 14:23:40.093560    9664 buildroot.go:174] setting up certificates
	I0407 14:23:40.093560    9664 provision.go:84] configureAuth start
	I0407 14:23:40.093560    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:23:42.265582    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:23:42.265665    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:42.265754    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:23:44.855485    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:23:44.855722    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:44.855835    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:23:47.062130    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:23:47.062905    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:47.062905    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:23:49.772804    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:23:49.772804    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:49.772804    9664 provision.go:143] copyHostCerts
	I0407 14:23:49.773810    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0407 14:23:49.774200    9664 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0407 14:23:49.774320    9664 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0407 14:23:49.774449    9664 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0407 14:23:49.775926    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0407 14:23:49.776527    9664 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0407 14:23:49.776690    9664 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0407 14:23:49.777186    9664 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0407 14:23:49.778050    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0407 14:23:49.778050    9664 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0407 14:23:49.778050    9664 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0407 14:23:49.778753    9664 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0407 14:23:49.780026    9664 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-140200 san=[127.0.0.1 172.17.81.10 localhost minikube multinode-140200]
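provision.go generates a server certificate signed by the local minikube CA, with the VM IP, localhost and the machine names as SANs. A compact sketch of issuing such a certificate with Go's crypto/x509; to stay self-contained it creates a throwaway CA instead of loading ca.pem and ca-key.pem, and it skips PEM encoding and error handling:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "fmt"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // Throwaway CA key pair; minikube loads ca.pem / ca-key.pem instead.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        caCert, _ := x509.ParseCertificate(caDER)

        // Server certificate with the SANs listed in the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.multinode-140200"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"localhost", "minikube", "multinode-140200"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.81.10")},
        }
        srvDER, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
        fmt.Printf("issued server cert, %d bytes of DER\n", len(srvDER))
    }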
	I0407 14:23:50.115484    9664 provision.go:177] copyRemoteCerts
	I0407 14:23:50.128003    9664 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 14:23:50.128174    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:23:52.315874    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:23:52.316091    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:52.316091    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:23:54.995282    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:23:54.995282    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:54.996035    9664 sshutil.go:53] new ssh client: &{IP:172.17.81.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200\id_rsa Username:docker}
	I0407 14:23:55.100399    9664 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9721873s)
	I0407 14:23:55.100534    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0407 14:23:55.100940    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 14:23:55.151480    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0407 14:23:55.152091    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1216 bytes)
	I0407 14:23:55.203267    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0407 14:23:55.203383    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0407 14:23:55.254672    9664 provision.go:87] duration metric: took 15.1609973s to configureAuth
	I0407 14:23:55.254780    9664 buildroot.go:189] setting minikube options for container-runtime
	I0407 14:23:55.255570    9664 config.go:182] Loaded profile config "multinode-140200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 14:23:55.255753    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:23:57.521423    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:23:57.521423    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:23:57.521423    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:24:00.230828    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:24:00.232072    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:00.238451    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:24:00.238762    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.81.10 22 <nil> <nil>}
	I0407 14:24:00.239294    9664 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0407 14:24:00.381497    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0407 14:24:00.381497    9664 buildroot.go:70] root file system type: tmpfs
	I0407 14:24:00.382032    9664 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0407 14:24:00.382075    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:24:02.567246    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:24:02.567246    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:02.567613    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:24:05.149330    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:24:05.149330    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:05.154945    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:24:05.155505    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.81.10 22 <nil> <nil>}
	I0407 14:24:05.155505    9664 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0407 14:24:05.324789    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0407 14:24:05.324789    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:24:07.539138    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:24:07.539404    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:07.539404    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:24:10.149946    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:24:10.149946    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:10.155051    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:24:10.155841    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.81.10 22 <nil> <nil>}
	I0407 14:24:10.155841    9664 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0407 14:24:12.663888    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0407 14:24:12.663888    9664 machine.go:96] duration metric: took 48.9548517s to provisionDockerMachine
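The unit file is first written to docker.service.new and only promoted when diff reports a difference (or, as in this run, when the installed unit does not exist yet), after which systemd is reloaded and docker enabled and restarted. A small helper that assembles the same one-liner for an SSH runner, string construction only, with the runner itself assumed:

    package main

    import "fmt"

    // installIfChanged builds the "promote only when different" pipeline the
    // provisioner runs over SSH: if the rendered unit differs from the
    // installed one (or the installed one is missing), move it into place and
    // reload, enable and restart the service.
    func installIfChanged(unit, service string) string {
        newPath := unit + ".new"
        return fmt.Sprintf(
            "sudo diff -u %s %s || { sudo mv %s %s; sudo systemctl -f daemon-reload && sudo systemctl -f enable %s && sudo systemctl -f restart %s; }",
            unit, newPath, newPath, unit, service, service)
    }

    func main() {
        fmt.Println(installIfChanged("/lib/systemd/system/docker.service", "docker"))
    }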
	I0407 14:24:12.663888    9664 start.go:293] postStartSetup for "multinode-140200" (driver="hyperv")
	I0407 14:24:12.663888    9664 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 14:24:12.675945    9664 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 14:24:12.675945    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:24:14.946945    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:24:14.948000    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:14.948086    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:24:17.750016    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:24:17.750016    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:17.751180    9664 sshutil.go:53] new ssh client: &{IP:172.17.81.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200\id_rsa Username:docker}
	I0407 14:24:17.858791    9664 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1828059s)
	I0407 14:24:17.871936    9664 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 14:24:17.880520    9664 command_runner.go:130] > NAME=Buildroot
	I0407 14:24:17.880520    9664 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0407 14:24:17.880581    9664 command_runner.go:130] > ID=buildroot
	I0407 14:24:17.880581    9664 command_runner.go:130] > VERSION_ID=2023.02.9
	I0407 14:24:17.880581    9664 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0407 14:24:17.880706    9664 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 14:24:17.880729    9664 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0407 14:24:17.881243    9664 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0407 14:24:17.882291    9664 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> 77282.pem in /etc/ssl/certs
	I0407 14:24:17.882291    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> /etc/ssl/certs/77282.pem
	I0407 14:24:17.895371    9664 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 14:24:17.919401    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem --> /etc/ssl/certs/77282.pem (1708 bytes)
	I0407 14:24:17.970939    9664 start.go:296] duration metric: took 5.3070107s for postStartSetup
	I0407 14:24:17.970939    9664 fix.go:56] duration metric: took 1m36.9499156s for fixHost
	I0407 14:24:17.970939    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:24:20.302777    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:24:20.302777    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:20.303438    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:24:22.951613    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:24:22.951613    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:22.958366    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:24:22.958951    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.81.10 22 <nil> <nil>}
	I0407 14:24:22.958951    9664 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 14:24:23.091704    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744035863.111759143
	
	I0407 14:24:23.091704    9664 fix.go:216] guest clock: 1744035863.111759143
	I0407 14:24:23.091704    9664 fix.go:229] Guest: 2025-04-07 14:24:23.111759143 +0000 UTC Remote: 2025-04-07 14:24:17.9709393 +0000 UTC m=+103.870720701 (delta=5.140819843s)
	I0407 14:24:23.091704    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:24:25.402754    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:24:25.402754    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:25.403633    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:24:28.187294    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:24:28.187391    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:28.192560    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:24:28.193328    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.81.10 22 <nil> <nil>}
	I0407 14:24:28.193328    9664 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744035863
	I0407 14:24:28.344101    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  7 14:24:23 UTC 2025
	
	I0407 14:24:28.344188    9664 fix.go:236] clock set: Mon Apr  7 14:24:23 UTC 2025
	 (err=<nil>)
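fix.go reads the guest clock with date +%s.%N over SSH, compares it against the host's timestamp for the end of postStartSetup (a delta of about 5.14s in this run), and corrects it with sudo date -s @<unix-seconds>. A minimal sketch of that comparison, using the values from this run and ignoring the SSH plumbing:

    package main

    import (
        "fmt"
        "strconv"
        "time"
    )

    // clockDelta parses the "date +%s.%N" output captured over SSH and returns
    // how far the guest clock is ahead of the host reference time. Going
    // through float64 loses a few hundred nanoseconds, which is fine here.
    func clockDelta(guestOutput string, hostRef time.Time) (time.Duration, error) {
        secs, err := strconv.ParseFloat(guestOutput, 64)
        if err != nil {
            return 0, err
        }
        guest := time.Unix(0, int64(secs*float64(time.Second)))
        return guest.Sub(hostRef), nil
    }

    func main() {
        // Values taken from the log: guest 1744035863.111759143, host 14:24:17.9709393 UTC.
        host := time.Date(2025, 4, 7, 14, 24, 17, 970939300, time.UTC)
        d, _ := clockDelta("1744035863.111759143", host)
        fmt.Printf("guest ahead by %s; correction: sudo date -s @1744035863\n", d.Round(time.Millisecond))
    }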
	I0407 14:24:28.344188    9664 start.go:83] releasing machines lock for "multinode-140200", held for 1m47.3237311s
	I0407 14:24:28.344458    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:24:30.559722    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:24:30.559722    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:30.560108    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:24:33.190065    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:24:33.190065    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:33.196802    9664 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0407 14:24:33.196802    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:24:33.204915    9664 ssh_runner.go:195] Run: cat /version.json
	I0407 14:24:33.204915    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:24:35.476677    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:24:35.476677    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:35.476677    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:24:35.484852    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:24:35.484852    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:35.484852    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:24:38.291981    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:24:38.291981    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:38.291981    9664 sshutil.go:53] new ssh client: &{IP:172.17.81.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200\id_rsa Username:docker}
	I0407 14:24:38.316113    9664 main.go:141] libmachine: [stdout =====>] : 172.17.81.10
	
	I0407 14:24:38.316946    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:24:38.317008    9664 sshutil.go:53] new ssh client: &{IP:172.17.81.10 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200\id_rsa Username:docker}
	I0407 14:24:38.394211    9664 command_runner.go:130] ! bash: line 1: curl.exe: command not found
	I0407 14:24:38.394708    9664 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.197866s)
	W0407 14:24:38.394822    9664 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
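The reachability probe runs curl.exe -sS -m 2 https://registry.k8s.io/ through the SSH runner; the Buildroot guest has no curl.exe, so the command exits 127 and the proxy warning quoted below is emitted. Purely for illustration (not what minikube actually runs), a plain Go probe with the same two-second budget would look like this:

    package main

    import (
        "fmt"
        "net/http"
        "time"
    )

    // canReachRegistry answers the same question as the probe in the log:
    // is https://registry.k8s.io/ reachable within two seconds?
    func canReachRegistry() error {
        client := &http.Client{Timeout: 2 * time.Second}
        resp, err := client.Get("https://registry.k8s.io/")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        return nil
    }

    func main() {
        fmt.Println("registry reachable:", canReachRegistry() == nil)
    }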
	I0407 14:24:38.412784    9664 command_runner.go:130] > {"iso_version": "v1.35.0", "kicbase_version": "v0.0.45-1736763277-20236", "minikube_version": "v1.35.0", "commit": "3fb24bd87c8c8761e2515e1a9ee13835a389ed68"}
	I0407 14:24:38.412839    9664 ssh_runner.go:235] Completed: cat /version.json: (5.2078836s)
	I0407 14:24:38.424508    9664 ssh_runner.go:195] Run: systemctl --version
	I0407 14:24:38.433457    9664 command_runner.go:130] > systemd 252 (252)
	I0407 14:24:38.433499    9664 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0407 14:24:38.444729    9664 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0407 14:24:38.453499    9664 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0407 14:24:38.453499    9664 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 14:24:38.464968    9664 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0407 14:24:38.497536    9664 command_runner.go:139] > /etc/cni/net.d/87-podman-bridge.conflist, 
	I0407 14:24:38.497654    9664 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 14:24:38.497654    9664 start.go:495] detecting cgroup driver to use...
	I0407 14:24:38.497654    9664 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0407 14:24:38.499828    9664 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0407 14:24:38.499828    9664 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0407 14:24:38.534270    9664 command_runner.go:130] > runtime-endpoint: unix:///run/containerd/containerd.sock
	I0407 14:24:38.546104    9664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0407 14:24:38.579011    9664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0407 14:24:38.600306    9664 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0407 14:24:38.613869    9664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0407 14:24:38.642635    9664 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 14:24:38.672814    9664 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0407 14:24:38.704357    9664 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 14:24:38.734850    9664 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 14:24:38.765012    9664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0407 14:24:38.794529    9664 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0407 14:24:38.826005    9664 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
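The Run lines above normalize /etc/containerd/config.toml with a series of sed edits: the pause image, restrict_oom_score_adj, SystemdCgroup=false for the cgroupfs driver, the runc v2 shim, the CNI conf dir and unprivileged ports. One of those edits expressed as a Go string transform, purely for illustration:

    package main

    import (
        "fmt"
        "regexp"
    )

    // setCgroupfs mirrors the SystemdCgroup sed above: force containerd's runc
    // shim onto the cgroupfs driver by rewriting the flag in config.toml text.
    func setCgroupfs(configTOML string) string {
        re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
        return re.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
    }

    func main() {
        in := "[plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n  SystemdCgroup = true\n"
        fmt.Print(setCgroupfs(in))
    }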
	I0407 14:24:38.853773    9664 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 14:24:38.870623    9664 command_runner.go:130] ! sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 14:24:38.870979    9664 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0407 14:24:38.883895    9664 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 14:24:38.916681    9664 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 14:24:38.942042    9664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:24:39.149380    9664 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0407 14:24:39.183027    9664 start.go:495] detecting cgroup driver to use...
	I0407 14:24:39.194328    9664 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0407 14:24:39.222670    9664 command_runner.go:130] > # /usr/lib/systemd/system/docker.service
	I0407 14:24:39.222670    9664 command_runner.go:130] > [Unit]
	I0407 14:24:39.222764    9664 command_runner.go:130] > Description=Docker Application Container Engine
	I0407 14:24:39.222764    9664 command_runner.go:130] > Documentation=https://docs.docker.com
	I0407 14:24:39.222764    9664 command_runner.go:130] > After=network.target  minikube-automount.service docker.socket
	I0407 14:24:39.222840    9664 command_runner.go:130] > Requires= minikube-automount.service docker.socket 
	I0407 14:24:39.222840    9664 command_runner.go:130] > StartLimitBurst=3
	I0407 14:24:39.222840    9664 command_runner.go:130] > StartLimitIntervalSec=60
	I0407 14:24:39.222840    9664 command_runner.go:130] > [Service]
	I0407 14:24:39.222902    9664 command_runner.go:130] > Type=notify
	I0407 14:24:39.222902    9664 command_runner.go:130] > Restart=on-failure
	I0407 14:24:39.222902    9664 command_runner.go:130] > # This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	I0407 14:24:39.223009    9664 command_runner.go:130] > # The base configuration already specifies an 'ExecStart=...' command. The first directive
	I0407 14:24:39.223009    9664 command_runner.go:130] > # here is to clear out that command inherited from the base configuration. Without this,
	I0407 14:24:39.223009    9664 command_runner.go:130] > # the command from the base configuration and the command specified here are treated as
	I0407 14:24:39.223009    9664 command_runner.go:130] > # a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	I0407 14:24:39.223009    9664 command_runner.go:130] > # will catch this invalid input and refuse to start the service with an error like:
	I0407 14:24:39.223009    9664 command_runner.go:130] > #  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	I0407 14:24:39.223233    9664 command_runner.go:130] > # NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	I0407 14:24:39.223233    9664 command_runner.go:130] > # container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	I0407 14:24:39.223301    9664 command_runner.go:130] > ExecStart=
	I0407 14:24:39.223341    9664 command_runner.go:130] > ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	I0407 14:24:39.223341    9664 command_runner.go:130] > ExecReload=/bin/kill -s HUP $MAINPID
	I0407 14:24:39.223341    9664 command_runner.go:130] > # Having non-zero Limit*s causes performance problems due to accounting overhead
	I0407 14:24:39.223441    9664 command_runner.go:130] > # in the kernel. We recommend using cgroups to do container-local accounting.
	I0407 14:24:39.223441    9664 command_runner.go:130] > LimitNOFILE=infinity
	I0407 14:24:39.223441    9664 command_runner.go:130] > LimitNPROC=infinity
	I0407 14:24:39.223441    9664 command_runner.go:130] > LimitCORE=infinity
	I0407 14:24:39.223441    9664 command_runner.go:130] > # Uncomment TasksMax if your systemd version supports it.
	I0407 14:24:39.223441    9664 command_runner.go:130] > # Only systemd 226 and above support this version.
	I0407 14:24:39.223441    9664 command_runner.go:130] > TasksMax=infinity
	I0407 14:24:39.223441    9664 command_runner.go:130] > TimeoutStartSec=0
	I0407 14:24:39.223562    9664 command_runner.go:130] > # set delegate yes so that systemd does not reset the cgroups of docker containers
	I0407 14:24:39.223562    9664 command_runner.go:130] > Delegate=yes
	I0407 14:24:39.223692    9664 command_runner.go:130] > # kill only the docker process, not all processes in the cgroup
	I0407 14:24:39.223692    9664 command_runner.go:130] > KillMode=process
	I0407 14:24:39.223692    9664 command_runner.go:130] > [Install]
	I0407 14:24:39.223692    9664 command_runner.go:130] > WantedBy=multi-user.target
	I0407 14:24:39.239764    9664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 14:24:39.272337    9664 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 14:24:39.318378    9664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 14:24:39.354724    9664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 14:24:39.388275    9664 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0407 14:24:39.453914    9664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 14:24:39.479650    9664 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 14:24:39.512303    9664 command_runner.go:130] > runtime-endpoint: unix:///var/run/cri-dockerd.sock
	I0407 14:24:39.525980    9664 ssh_runner.go:195] Run: which cri-dockerd
	I0407 14:24:39.532680    9664 command_runner.go:130] > /usr/bin/cri-dockerd
	I0407 14:24:39.544531    9664 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0407 14:24:39.579586    9664 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0407 14:24:39.620673    9664 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0407 14:24:39.818163    9664 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0407 14:24:40.008711    9664 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0407 14:24:40.009015    9664 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0407 14:24:40.058666    9664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:24:40.263139    9664 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0407 14:24:42.979685    9664 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.7165255s)
	I0407 14:24:42.991974    9664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0407 14:24:43.027377    9664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 14:24:43.062407    9664 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0407 14:24:43.255872    9664 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0407 14:24:43.453774    9664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:24:43.648518    9664 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0407 14:24:43.686304    9664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 14:24:43.719797    9664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:24:43.911950    9664 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0407 14:24:44.013028    9664 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0407 14:24:44.024339    9664 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0407 14:24:44.032339    9664 command_runner.go:130] >   File: /var/run/cri-dockerd.sock
	I0407 14:24:44.032339    9664 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0407 14:24:44.033170    9664 command_runner.go:130] > Device: 0,22	Inode: 852         Links: 1
	I0407 14:24:44.033170    9664 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: ( 1000/  docker)
	I0407 14:24:44.033170    9664 command_runner.go:130] > Access: 2025-04-07 14:24:43.956784007 +0000
	I0407 14:24:44.033170    9664 command_runner.go:130] > Modify: 2025-04-07 14:24:43.956784007 +0000
	I0407 14:24:44.033170    9664 command_runner.go:130] > Change: 2025-04-07 14:24:43.960784030 +0000
	I0407 14:24:44.033170    9664 command_runner.go:130] >  Birth: -
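start.go gives the restarted cri-docker.service up to 60 seconds to expose its unix socket, checking with stat; in this run it is already present on the first attempt. A hedged sketch of such a wait, using a local os.Stat in place of the SSH runner:

    package main

    import (
        "fmt"
        "os"
        "time"
    )

    // waitForSocket polls until the path exists and is a unix socket, or the
    // timeout expires, matching the contract of the 60s wait in the log.
    func waitForSocket(path string, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            if fi, err := os.Stat(path); err == nil && fi.Mode()&os.ModeSocket != 0 {
                return nil
            }
            time.Sleep(500 * time.Millisecond)
        }
        return fmt.Errorf("%s did not appear within %s", path, timeout)
    }

    func main() {
        fmt.Println(waitForSocket("/var/run/cri-dockerd.sock", 60*time.Second))
    }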
	I0407 14:24:44.033510    9664 start.go:563] Will wait 60s for crictl version
	I0407 14:24:44.045742    9664 ssh_runner.go:195] Run: which crictl
	I0407 14:24:44.051857    9664 command_runner.go:130] > /usr/bin/crictl
	I0407 14:24:44.063467    9664 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 14:24:44.118378    9664 command_runner.go:130] > Version:  0.1.0
	I0407 14:24:44.118480    9664 command_runner.go:130] > RuntimeName:  docker
	I0407 14:24:44.118480    9664 command_runner.go:130] > RuntimeVersion:  27.4.0
	I0407 14:24:44.118480    9664 command_runner.go:130] > RuntimeApiVersion:  v1
	I0407 14:24:44.118661    9664 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0407 14:24:44.127363    9664 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 14:24:44.165417    9664 command_runner.go:130] > 27.4.0
	I0407 14:24:44.176494    9664 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 14:24:44.213583    9664 command_runner.go:130] > 27.4.0
	I0407 14:24:44.218574    9664 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0407 14:24:44.218574    9664 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0407 14:24:44.222577    9664 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0407 14:24:44.222577    9664 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0407 14:24:44.222577    9664 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0407 14:24:44.223576    9664 ip.go:211] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:71:e6:e3 Flags:up|broadcast|multicast|running}
	I0407 14:24:44.225580    9664 ip.go:214] interface addr: fe80::8b22:fcbb:f73:c9e5/64
	I0407 14:24:44.225580    9664 ip.go:214] interface addr: 172.17.80.1/20
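ip.go walks the host's network interfaces, skips those whose names do not start with "vEthernet (Default Switch)", and takes the matching interface's IPv4 address (172.17.80.1/20 here) as the host.minikube.internal address. A rough equivalent using only the standard net package:

    package main

    import (
        "fmt"
        "net"
        "strings"
    )

    // ipForInterface returns the first IPv4 address of the first interface
    // whose name starts with the given prefix, mirroring the search in ip.go.
    func ipForInterface(prefix string) (net.IP, error) {
        ifaces, err := net.Interfaces()
        if err != nil {
            return nil, err
        }
        for _, ifc := range ifaces {
            if !strings.HasPrefix(ifc.Name, prefix) {
                continue // e.g. "Ethernet 2" does not match, as in the log
            }
            addrs, err := ifc.Addrs()
            if err != nil {
                return nil, err
            }
            for _, a := range addrs {
                if ipnet, ok := a.(*net.IPNet); ok && ipnet.IP.To4() != nil {
                    return ipnet.IP, nil // 172.17.80.1 in this run
                }
            }
        }
        return nil, fmt.Errorf("no interface matches prefix %q", prefix)
    }

    func main() {
        ip, err := ipForInterface("vEthernet (Default Switch)")
        fmt.Println(ip, err)
    }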
	I0407 14:24:44.236594    9664 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0407 14:24:44.242639    9664 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "172.17.80.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 14:24:44.263857    9664 kubeadm.go:883] updating cluster {Name:multinode-140200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-1
40200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.81.10 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.82.40 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.83.62 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-ga
dget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: Disable
Optimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 14:24:44.263857    9664 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 14:24:44.273242    9664 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0407 14:24:44.300803    9664 command_runner.go:130] > kindest/kindnetd:v20250214-acbabc1a
	I0407 14:24:44.300803    9664 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.32.2
	I0407 14:24:44.300803    9664 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.32.2
	I0407 14:24:44.300803    9664 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.32.2
	I0407 14:24:44.300803    9664 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.32.2
	I0407 14:24:44.300803    9664 command_runner.go:130] > registry.k8s.io/etcd:3.5.16-0
	I0407 14:24:44.300803    9664 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0407 14:24:44.300803    9664 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0407 14:24:44.300803    9664 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 14:24:44.300803    9664 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0407 14:24:44.300803    9664 docker.go:689] Got preloaded images: -- stdout --
	kindest/kindnetd:v20250214-acbabc1a
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0407 14:24:44.300803    9664 docker.go:619] Images already preloaded, skipping extraction
	I0407 14:24:44.310785    9664 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0407 14:24:44.338977    9664 command_runner.go:130] > kindest/kindnetd:v20250214-acbabc1a
	I0407 14:24:44.339095    9664 command_runner.go:130] > registry.k8s.io/kube-apiserver:v1.32.2
	I0407 14:24:44.339095    9664 command_runner.go:130] > registry.k8s.io/kube-controller-manager:v1.32.2
	I0407 14:24:44.339095    9664 command_runner.go:130] > registry.k8s.io/kube-proxy:v1.32.2
	I0407 14:24:44.339095    9664 command_runner.go:130] > registry.k8s.io/kube-scheduler:v1.32.2
	I0407 14:24:44.339095    9664 command_runner.go:130] > registry.k8s.io/etcd:3.5.16-0
	I0407 14:24:44.339095    9664 command_runner.go:130] > registry.k8s.io/coredns/coredns:v1.11.3
	I0407 14:24:44.339095    9664 command_runner.go:130] > registry.k8s.io/pause:3.10
	I0407 14:24:44.339095    9664 command_runner.go:130] > gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 14:24:44.339095    9664 command_runner.go:130] > gcr.io/k8s-minikube/busybox:1.28
	I0407 14:24:44.339095    9664 docker.go:689] Got preloaded images: -- stdout --
	kindest/kindnetd:v20250214-acbabc1a
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0407 14:24:44.339095    9664 cache_images.go:84] Images are preloaded, skipping loading
	I0407 14:24:44.339095    9664 kubeadm.go:934] updating node { 172.17.81.10 8443 v1.32.2 docker true true} ...
	I0407 14:24:44.339663    9664 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-140200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.81.10
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:multinode-140200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 14:24:44.348043    9664 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0407 14:24:44.417290    9664 command_runner.go:130] > cgroupfs
	I0407 14:24:44.417353    9664 cni.go:84] Creating CNI manager for ""
	I0407 14:24:44.417353    9664 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0407 14:24:44.417353    9664 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 14:24:44.417353    9664 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.81.10 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-140200 NodeName:multinode-140200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.81.10"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.81.10 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc
/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 14:24:44.417353    9664 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.81.10
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "multinode-140200"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.17.81.10"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.81.10"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0407 14:24:44.428685    9664 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 14:24:44.447042    9664 command_runner.go:130] > kubeadm
	I0407 14:24:44.447042    9664 command_runner.go:130] > kubectl
	I0407 14:24:44.447042    9664 command_runner.go:130] > kubelet
	I0407 14:24:44.447042    9664 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 14:24:44.457777    9664 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 14:24:44.475470    9664 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0407 14:24:44.509591    9664 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 14:24:44.541732    9664 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2294 bytes)
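The kubeadm.yaml printed above is rendered in memory from the cluster config and copied to /var/tmp/minikube/kubeadm.yaml.new; the per-node fields (advertiseAddress, node-ip, certSANs) are what change between restarts, as the diff later in the log confirms. A small Go sketch of rendering such a snippet with text/template (a simplification for illustration only, not minikube's actual template; field values taken from the log):

package main

import (
	"os"
	"text/template"
)

// A trimmed-down version of the InitConfiguration section shown in the log.
const initCfg = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: {{.APIServerPort}}
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    - name: "node-ip"
      value: "{{.NodeIP}}"
  taints: []
`

func main() {
	params := struct {
		NodeIP        string
		NodeName      string
		APIServerPort int
	}{"172.17.81.10", "multinode-140200", 8443} // values from the log

	tmpl := template.Must(template.New("kubeadm").Parse(initCfg))
	if err := tmpl.Execute(os.Stdout, params); err != nil {
		panic(err)
	}
}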
	I0407 14:24:44.587264    9664 ssh_runner.go:195] Run: grep 172.17.81.10	control-plane.minikube.internal$ /etc/hosts
	I0407 14:24:44.593595    9664 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "172.17.81.10	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0407 14:24:44.627624    9664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:24:44.819743    9664 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 14:24:44.849015    9664 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200 for IP: 172.17.81.10
	I0407 14:24:44.849015    9664 certs.go:194] generating shared ca certs ...
	I0407 14:24:44.849015    9664 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:24:44.850041    9664 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0407 14:24:44.850514    9664 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0407 14:24:44.850514    9664 certs.go:256] generating profile certs ...
	I0407 14:24:44.851273    9664 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\client.key
	I0407 14:24:44.851273    9664 certs.go:363] generating signed profile cert for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.key.90c83a59
	I0407 14:24:44.851273    9664 crypto.go:68] Generating cert C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.crt.90c83a59 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 172.17.81.10]
	I0407 14:24:45.630073    9664 crypto.go:156] Writing cert to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.crt.90c83a59 ...
	I0407 14:24:45.630073    9664 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.crt.90c83a59: {Name:mkf42bc21a237f89ddbd6add9d917623f245de4d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:24:45.631035    9664 crypto.go:164] Writing key to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.key.90c83a59 ...
	I0407 14:24:45.631035    9664 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.key.90c83a59: {Name:mk2bb99af4db552e24dfaf61165a338e38686628 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:24:45.633036    9664 certs.go:381] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.crt.90c83a59 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.crt
	I0407 14:24:45.649054    9664 certs.go:385] copying C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.key.90c83a59 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.key
	I0407 14:24:45.650053    9664 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\proxy-client.key
	I0407 14:24:45.650053    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /var/lib/minikube/certs/ca.crt
	I0407 14:24:45.651014    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key -> /var/lib/minikube/certs/ca.key
	I0407 14:24:45.651014    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0407 14:24:45.651014    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0407 14:24:45.651014    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0407 14:24:45.651014    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0407 14:24:45.651014    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0407 14:24:45.651014    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0407 14:24:45.652017    9664 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem (1338 bytes)
	W0407 14:24:45.652017    9664 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728_empty.pem, impossibly tiny 0 bytes
	I0407 14:24:45.652017    9664 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0407 14:24:45.653036    9664 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0407 14:24:45.653036    9664 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0407 14:24:45.653036    9664 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0407 14:24:45.654055    9664 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem (1708 bytes)
	I0407 14:24:45.654055    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem -> /usr/share/ca-certificates/7728.pem
	I0407 14:24:45.654055    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> /usr/share/ca-certificates/77282.pem
	I0407 14:24:45.654055    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:24:45.655051    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 14:24:45.703834    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 14:24:45.750128    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 14:24:45.795646    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0407 14:24:45.839915    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0407 14:24:45.894262    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0407 14:24:45.937910    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 14:24:45.983793    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 14:24:46.037317    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem --> /usr/share/ca-certificates/7728.pem (1338 bytes)
	I0407 14:24:46.085937    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem --> /usr/share/ca-certificates/77282.pem (1708 bytes)
	I0407 14:24:46.136560    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 14:24:46.183619    9664 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 14:24:46.229947    9664 ssh_runner.go:195] Run: openssl version
	I0407 14:24:46.237848    9664 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0407 14:24:46.249845    9664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77282.pem && ln -fs /usr/share/ca-certificates/77282.pem /etc/ssl/certs/77282.pem"
	I0407 14:24:46.281889    9664 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77282.pem
	I0407 14:24:46.290380    9664 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Apr  7 12:38 /usr/share/ca-certificates/77282.pem
	I0407 14:24:46.290471    9664 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:38 /usr/share/ca-certificates/77282.pem
	I0407 14:24:46.302987    9664 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77282.pem
	I0407 14:24:46.312890    9664 command_runner.go:130] > 3ec20f2e
	I0407 14:24:46.326256    9664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77282.pem /etc/ssl/certs/3ec20f2e.0"
	I0407 14:24:46.359033    9664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 14:24:46.389880    9664 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:24:46.398519    9664 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Apr  7 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:24:46.398605    9664 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:24:46.410491    9664 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 14:24:46.420146    9664 command_runner.go:130] > b5213941
	I0407 14:24:46.431162    9664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 14:24:46.461651    9664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7728.pem && ln -fs /usr/share/ca-certificates/7728.pem /etc/ssl/certs/7728.pem"
	I0407 14:24:46.492323    9664 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7728.pem
	I0407 14:24:46.498263    9664 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Apr  7 12:38 /usr/share/ca-certificates/7728.pem
	I0407 14:24:46.498620    9664 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:38 /usr/share/ca-certificates/7728.pem
	I0407 14:24:46.510533    9664 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7728.pem
	I0407 14:24:46.519544    9664 command_runner.go:130] > 51391683
	I0407 14:24:46.531465    9664 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7728.pem /etc/ssl/certs/51391683.0"
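The openssl/ln sequence above installs each CA certificate under /etc/ssl/certs and then creates a "<subject-hash>.0" symlink (e.g. 3ec20f2e.0, b5213941.0, 51391683.0), which is how OpenSSL locates trusted CAs at verification time. A simplified Go sketch of the hash-and-link pattern, assuming the certificate is already readable on disk (hypothetical helper, not minikube's own code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash reproduces the "openssl x509 -hash" + "ln -fs" pattern:
// compute the subject hash, then symlink <certsDir>/<hash>.0 to the cert.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out)) // e.g. "3ec20f2e"
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // like -f: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Paths taken from the log; writing under /etc/ssl/certs requires root.
	if err := linkBySubjectHash("/usr/share/ca-certificates/77282.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}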
	I0407 14:24:46.568502    9664 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 14:24:46.576497    9664 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 14:24:46.576613    9664 command_runner.go:130] >   Size: 1176      	Blocks: 8          IO Block: 4096   regular file
	I0407 14:24:46.576613    9664 command_runner.go:130] > Device: 8,1	Inode: 7336801     Links: 1
	I0407 14:24:46.576613    9664 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0407 14:24:46.576672    9664 command_runner.go:130] > Access: 2025-04-07 13:59:48.055920369 +0000
	I0407 14:24:46.576672    9664 command_runner.go:130] > Modify: 2025-04-07 13:59:48.055920369 +0000
	I0407 14:24:46.576672    9664 command_runner.go:130] > Change: 2025-04-07 13:59:48.055920369 +0000
	I0407 14:24:46.576672    9664 command_runner.go:130] >  Birth: 2025-04-07 13:59:48.055920369 +0000
	I0407 14:24:46.588420    9664 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0407 14:24:46.598132    9664 command_runner.go:130] > Certificate will not expire
	I0407 14:24:46.609613    9664 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0407 14:24:46.621307    9664 command_runner.go:130] > Certificate will not expire
	I0407 14:24:46.633205    9664 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0407 14:24:46.644280    9664 command_runner.go:130] > Certificate will not expire
	I0407 14:24:46.655810    9664 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0407 14:24:46.665305    9664 command_runner.go:130] > Certificate will not expire
	I0407 14:24:46.678515    9664 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0407 14:24:46.687622    9664 command_runner.go:130] > Certificate will not expire
	I0407 14:24:46.702911    9664 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0407 14:24:46.711879    9664 command_runner.go:130] > Certificate will not expire
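Each "openssl x509 -noout -checkend 86400" call above asks whether the given certificate expires within the next 24 hours; "Certificate will not expire" means the existing control-plane certs can be reused. The same check expressed in Go with crypto/x509, as a sketch (hypothetical helper; the file path is one of those probed in the log):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// inside the given window, the question "-checkend 86400" answers (86400 s = 24 h).
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	if soon {
		fmt.Println("Certificate will expire")
	} else {
		fmt.Println("Certificate will not expire")
	}
}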
	I0407 14:24:46.711879    9664 kubeadm.go:392] StartCluster: {Name:multinode-140200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:multinode-1402
00 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.81.10 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:172.17.82.40 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true} {Name:m03 IP:172.17.83.62 Port:0 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadge
t:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:24:46.724383    9664 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0407 14:24:46.760994    9664 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 14:24:46.782638    9664 command_runner.go:130] > /var/lib/kubelet/config.yaml
	I0407 14:24:46.782638    9664 command_runner.go:130] > /var/lib/kubelet/kubeadm-flags.env
	I0407 14:24:46.782638    9664 command_runner.go:130] > /var/lib/minikube/etcd:
	I0407 14:24:46.782638    9664 command_runner.go:130] > member
	I0407 14:24:46.782638    9664 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0407 14:24:46.782638    9664 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0407 14:24:46.793657    9664 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0407 14:24:46.810195    9664 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0407 14:24:46.811497    9664 kubeconfig.go:47] verify endpoint returned: get endpoint: "multinode-140200" does not appear in C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 14:24:46.812203    9664 kubeconfig.go:62] C:\Users\jenkins.minikube3\minikube-integration\kubeconfig needs updating (will repair): [kubeconfig missing "multinode-140200" cluster setting kubeconfig missing "multinode-140200" context setting]
	I0407 14:24:46.813123    9664 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:24:46.832112    9664 loader.go:402] Config loaded from file:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 14:24:46.833121    9664 kapi.go:59] client config for multinode-140200: &rest.Config{Host:"https://172.17.81.10:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-140200/client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\multinode-140200/client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:
[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2be92e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0407 14:24:46.835119    9664 cert_rotation.go:140] Starting client certificate rotation controller
	I0407 14:24:46.835119    9664 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0407 14:24:46.835119    9664 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0407 14:24:46.835119    9664 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
	I0407 14:24:46.835119    9664 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0407 14:24:46.845125    9664 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0407 14:24:46.862731    9664 command_runner.go:130] > --- /var/tmp/minikube/kubeadm.yaml
	I0407 14:24:46.862731    9664 command_runner.go:130] > +++ /var/tmp/minikube/kubeadm.yaml.new
	I0407 14:24:46.862731    9664 command_runner.go:130] > @@ -1,7 +1,7 @@
	I0407 14:24:46.862731    9664 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta4
	I0407 14:24:46.862731    9664 command_runner.go:130] >  kind: InitConfiguration
	I0407 14:24:46.862731    9664 command_runner.go:130] >  localAPIEndpoint:
	I0407 14:24:46.862731    9664 command_runner.go:130] > -  advertiseAddress: 172.17.92.89
	I0407 14:24:46.862731    9664 command_runner.go:130] > +  advertiseAddress: 172.17.81.10
	I0407 14:24:46.862731    9664 command_runner.go:130] >    bindPort: 8443
	I0407 14:24:46.862731    9664 command_runner.go:130] >  bootstrapTokens:
	I0407 14:24:46.862731    9664 command_runner.go:130] >    - groups:
	I0407 14:24:46.862731    9664 command_runner.go:130] > @@ -15,13 +15,13 @@
	I0407 14:24:46.862731    9664 command_runner.go:130] >    name: "multinode-140200"
	I0407 14:24:46.862731    9664 command_runner.go:130] >    kubeletExtraArgs:
	I0407 14:24:46.862731    9664 command_runner.go:130] >      - name: "node-ip"
	I0407 14:24:46.862731    9664 command_runner.go:130] > -      value: "172.17.92.89"
	I0407 14:24:46.862731    9664 command_runner.go:130] > +      value: "172.17.81.10"
	I0407 14:24:46.862731    9664 command_runner.go:130] >    taints: []
	I0407 14:24:46.862731    9664 command_runner.go:130] >  ---
	I0407 14:24:46.862731    9664 command_runner.go:130] >  apiVersion: kubeadm.k8s.io/v1beta4
	I0407 14:24:46.862731    9664 command_runner.go:130] >  kind: ClusterConfiguration
	I0407 14:24:46.863112    9664 command_runner.go:130] >  apiServer:
	I0407 14:24:46.863112    9664 command_runner.go:130] > -  certSANs: ["127.0.0.1", "localhost", "172.17.92.89"]
	I0407 14:24:46.863112    9664 command_runner.go:130] > +  certSANs: ["127.0.0.1", "localhost", "172.17.81.10"]
	I0407 14:24:46.863112    9664 command_runner.go:130] >    extraArgs:
	I0407 14:24:46.863112    9664 command_runner.go:130] >      - name: "enable-admission-plugins"
	I0407 14:24:46.863112    9664 command_runner.go:130] >        value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	I0407 14:24:46.863112    9664 kubeadm.go:640] detected kubeadm config drift (will reconfigure cluster from new /var/tmp/minikube/kubeadm.yaml):
	-- stdout --
	--- /var/tmp/minikube/kubeadm.yaml
	+++ /var/tmp/minikube/kubeadm.yaml.new
	@@ -1,7 +1,7 @@
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: InitConfiguration
	 localAPIEndpoint:
	-  advertiseAddress: 172.17.92.89
	+  advertiseAddress: 172.17.81.10
	   bindPort: 8443
	 bootstrapTokens:
	   - groups:
	@@ -15,13 +15,13 @@
	   name: "multinode-140200"
	   kubeletExtraArgs:
	     - name: "node-ip"
	-      value: "172.17.92.89"
	+      value: "172.17.81.10"
	   taints: []
	 ---
	 apiVersion: kubeadm.k8s.io/v1beta4
	 kind: ClusterConfiguration
	 apiServer:
	-  certSANs: ["127.0.0.1", "localhost", "172.17.92.89"]
	+  certSANs: ["127.0.0.1", "localhost", "172.17.81.10"]
	   extraArgs:
	     - name: "enable-admission-plugins"
	       value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	
	-- /stdout --
	I0407 14:24:46.863112    9664 kubeadm.go:1160] stopping kube-system containers ...
	I0407 14:24:46.872123    9664 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0407 14:24:46.902172    9664 command_runner.go:130] > b2d29d6fc774
	I0407 14:24:46.902481    9664 command_runner.go:130] > 1e0d3f9a0f21
	I0407 14:24:46.902481    9664 command_runner.go:130] > 47eb0b16ce1d
	I0407 14:24:46.902481    9664 command_runner.go:130] > f6c740bfe5bb
	I0407 14:24:46.902481    9664 command_runner.go:130] > 2a1208136f15
	I0407 14:24:46.902481    9664 command_runner.go:130] > ec26042b5271
	I0407 14:24:46.902481    9664 command_runner.go:130] > 0d317e51cbf8
	I0407 14:24:46.902481    9664 command_runner.go:130] > 728d07c29084
	I0407 14:24:46.902481    9664 command_runner.go:130] > 8c615c7e0506
	I0407 14:24:46.902578    9664 command_runner.go:130] > 159f6e03fef6
	I0407 14:24:46.902578    9664 command_runner.go:130] > 783fd069538d
	I0407 14:24:46.902578    9664 command_runner.go:130] > 92c49129b5b0
	I0407 14:24:46.902611    9664 command_runner.go:130] > 50c1342f8214
	I0407 14:24:46.902611    9664 command_runner.go:130] > d7cc03773793
	I0407 14:24:46.902611    9664 command_runner.go:130] > 8bd2f8fc3a28
	I0407 14:24:46.902611    9664 command_runner.go:130] > ad64d975eb39
	I0407 14:24:46.902664    9664 docker.go:483] Stopping containers: [b2d29d6fc774 1e0d3f9a0f21 47eb0b16ce1d f6c740bfe5bb 2a1208136f15 ec26042b5271 0d317e51cbf8 728d07c29084 8c615c7e0506 159f6e03fef6 783fd069538d 92c49129b5b0 50c1342f8214 d7cc03773793 8bd2f8fc3a28 ad64d975eb39]
	I0407 14:24:46.910589    9664 ssh_runner.go:195] Run: docker stop b2d29d6fc774 1e0d3f9a0f21 47eb0b16ce1d f6c740bfe5bb 2a1208136f15 ec26042b5271 0d317e51cbf8 728d07c29084 8c615c7e0506 159f6e03fef6 783fd069538d 92c49129b5b0 50c1342f8214 d7cc03773793 8bd2f8fc3a28 ad64d975eb39
	I0407 14:24:46.940247    9664 command_runner.go:130] > b2d29d6fc774
	I0407 14:24:46.940247    9664 command_runner.go:130] > 1e0d3f9a0f21
	I0407 14:24:46.940247    9664 command_runner.go:130] > 47eb0b16ce1d
	I0407 14:24:46.940247    9664 command_runner.go:130] > f6c740bfe5bb
	I0407 14:24:46.940247    9664 command_runner.go:130] > 2a1208136f15
	I0407 14:24:46.940247    9664 command_runner.go:130] > ec26042b5271
	I0407 14:24:46.940247    9664 command_runner.go:130] > 0d317e51cbf8
	I0407 14:24:46.940247    9664 command_runner.go:130] > 728d07c29084
	I0407 14:24:46.940247    9664 command_runner.go:130] > 8c615c7e0506
	I0407 14:24:46.940247    9664 command_runner.go:130] > 159f6e03fef6
	I0407 14:24:46.940247    9664 command_runner.go:130] > 783fd069538d
	I0407 14:24:46.940247    9664 command_runner.go:130] > 92c49129b5b0
	I0407 14:24:46.940247    9664 command_runner.go:130] > 50c1342f8214
	I0407 14:24:46.940247    9664 command_runner.go:130] > d7cc03773793
	I0407 14:24:46.940247    9664 command_runner.go:130] > 8bd2f8fc3a28
	I0407 14:24:46.940247    9664 command_runner.go:130] > ad64d975eb39
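The two docker invocations above first list every container whose name matches the kube-system naming scheme (k8s_.*_(kube-system)_), then stop them all in a single "docker stop" call before the control plane is reconfigured. A compact Go sketch of that list-then-stop step using the exact filter and format strings from the log (hypothetical wrapper around the docker CLI, not minikube's own code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// stopKubeSystemContainers lists container IDs whose names match the
// kube-system naming scheme, then stops them in one docker invocation.
func stopKubeSystemContainers() error {
	out, err := exec.Command("docker", "ps", "-a",
		"--filter=name=k8s_.*_(kube-system)_", "--format={{.ID}}").Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		return nil // nothing to stop
	}
	args := append([]string{"stop"}, ids...)
	return exec.Command("docker", args...).Run()
}

func main() {
	if err := stopKubeSystemContainers(); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}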
	I0407 14:24:46.951475    9664 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0407 14:24:46.988707    9664 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 14:24:47.005616    9664 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	I0407 14:24:47.005677    9664 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	I0407 14:24:47.005677    9664 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	I0407 14:24:47.005677    9664 command_runner.go:130] ! ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 14:24:47.005677    9664 kubeadm.go:155] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0407 14:24:47.005677    9664 kubeadm.go:157] found existing configuration files:
	
	I0407 14:24:47.015905    9664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 14:24:47.032555    9664 command_runner.go:130] ! grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 14:24:47.033685    9664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0407 14:24:47.046729    9664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0407 14:24:47.076609    9664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 14:24:47.093621    9664 command_runner.go:130] ! grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 14:24:47.094645    9664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0407 14:24:47.105621    9664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0407 14:24:47.135971    9664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 14:24:47.156962    9664 command_runner.go:130] ! grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 14:24:47.156962    9664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0407 14:24:47.172736    9664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 14:24:47.204155    9664 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 14:24:47.222763    9664 command_runner.go:130] ! grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 14:24:47.223059    9664 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0407 14:24:47.233762    9664 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 14:24:47.265209    9664 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 14:24:47.285673    9664 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:24:47.587507    9664 command_runner.go:130] > [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 14:24:47.587611    9664 command_runner.go:130] > [certs] Using existing ca certificate authority
	I0407 14:24:47.587611    9664 command_runner.go:130] > [certs] Using existing apiserver certificate and key on disk
	I0407 14:24:47.587611    9664 command_runner.go:130] > [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0407 14:24:47.587611    9664 command_runner.go:130] > [certs] Using existing front-proxy-ca certificate authority
	I0407 14:24:47.587611    9664 command_runner.go:130] > [certs] Using existing front-proxy-client certificate and key on disk
	I0407 14:24:47.587611    9664 command_runner.go:130] > [certs] Using existing etcd/ca certificate authority
	I0407 14:24:47.587611    9664 command_runner.go:130] > [certs] Using existing etcd/server certificate and key on disk
	I0407 14:24:47.587611    9664 command_runner.go:130] > [certs] Using existing etcd/peer certificate and key on disk
	I0407 14:24:47.587611    9664 command_runner.go:130] > [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0407 14:24:47.587611    9664 command_runner.go:130] > [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0407 14:24:47.587611    9664 command_runner.go:130] > [certs] Using the existing "sa" key
	I0407 14:24:47.587829    9664 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:24:48.874076    9664 command_runner.go:130] > [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 14:24:48.874152    9664 command_runner.go:130] > [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 14:24:48.874152    9664 command_runner.go:130] > [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0407 14:24:48.874152    9664 command_runner.go:130] > [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 14:24:48.874152    9664 command_runner.go:130] > [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 14:24:48.874152    9664 command_runner.go:130] > [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 14:24:48.874257    9664 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.2863132s)
	I0407 14:24:48.874257    9664 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:24:49.185460    9664 command_runner.go:130] > [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 14:24:49.185460    9664 command_runner.go:130] > [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 14:24:49.185460    9664 command_runner.go:130] > [kubelet-start] Starting the kubelet
	I0407 14:24:49.185579    9664 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:24:49.273614    9664 command_runner.go:130] > [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 14:24:49.273700    9664 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 14:24:49.273789    9664 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 14:24:49.273789    9664 command_runner.go:130] > [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 14:24:49.273865    9664 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:24:49.361824    9664 command_runner.go:130] > [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 14:24:49.361824    9664 api_server.go:52] waiting for apiserver process to appear ...
	I0407 14:24:49.371833    9664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:24:49.873791    9664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:24:50.375336    9664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:24:50.877729    9664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:24:51.376820    9664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:24:51.405603    9664 command_runner.go:130] > 1986
	I0407 14:24:51.405603    9664 api_server.go:72] duration metric: took 2.0437631s to wait for apiserver process to appear ...
	I0407 14:24:51.405603    9664 api_server.go:88] waiting for apiserver healthz status ...
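The lines that follow repeatedly probe https://172.17.81.10:8443/healthz: 403 responses (the probe runs as system:anonymous before RBAC bootstrap completes) and 500 responses (post-start hooks still reported as failed) are retried until the endpoint returns 200. A minimal sketch of such a probe loop, assuming TLS verification is skipped rather than loading the cluster CA and client certificate (an illustrative simplification, not minikube's actual health check):

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the deadline passes; 403/500 responses, as seen in the log, are retried.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip verification instead of loading the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond) // the log retries roughly every 500ms
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://172.17.81.10:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}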
	I0407 14:24:51.405603    9664 api_server.go:253] Checking apiserver healthz at https://172.17.81.10:8443/healthz ...
	I0407 14:24:54.307929    9664 api_server.go:279] https://172.17.81.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0407 14:24:54.307929    9664 api_server.go:103] status: https://172.17.81.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0407 14:24:54.307929    9664 api_server.go:253] Checking apiserver healthz at https://172.17.81.10:8443/healthz ...
	I0407 14:24:54.395698    9664 api_server.go:279] https://172.17.81.10:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0407 14:24:54.395698    9664 api_server.go:103] status: https://172.17.81.10:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0407 14:24:54.406476    9664 api_server.go:253] Checking apiserver healthz at https://172.17.81.10:8443/healthz ...
	I0407 14:24:54.494240    9664 api_server.go:279] https://172.17.81.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0407 14:24:54.494240    9664 api_server.go:103] status: https://172.17.81.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[-]poststarthook/start-service-ip-repair-controllers failed: reason withheld
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0407 14:24:54.906650    9664 api_server.go:253] Checking apiserver healthz at https://172.17.81.10:8443/healthz ...
	I0407 14:24:54.915631    9664 api_server.go:279] https://172.17.81.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0407 14:24:54.915688    9664 api_server.go:103] status: https://172.17.81.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0407 14:24:55.406297    9664 api_server.go:253] Checking apiserver healthz at https://172.17.81.10:8443/healthz ...
	I0407 14:24:55.423321    9664 api_server.go:279] https://172.17.81.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0407 14:24:55.423321    9664 api_server.go:103] status: https://172.17.81.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0407 14:24:55.906973    9664 api_server.go:253] Checking apiserver healthz at https://172.17.81.10:8443/healthz ...
	I0407 14:24:55.917283    9664 api_server.go:279] https://172.17.81.10:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0407 14:24:55.917283    9664 api_server.go:103] status: https://172.17.81.10:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0407 14:24:56.407008    9664 api_server.go:253] Checking apiserver healthz at https://172.17.81.10:8443/healthz ...
	I0407 14:24:56.415527    9664 api_server.go:279] https://172.17.81.10:8443/healthz returned 200:
	ok
	I0407 14:24:56.415527    9664 discovery_client.go:658] "Request Body" body=""
	I0407 14:24:56.415527    9664 round_trippers.go:470] GET https://172.17.81.10:8443/version
	I0407 14:24:56.415527    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:56.415527    9664 round_trippers.go:480]     Accept: application/json, */*
	I0407 14:24:56.415527    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:56.431760    9664 round_trippers.go:581] Response Status: 200 OK in 16 milliseconds
	I0407 14:24:56.431836    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:56.431836    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:56 GMT
	I0407 14:24:56.431836    9664 round_trippers.go:587]     Audit-Id: e9888871-2152-473f-84fc-74747ca3c545
	I0407 14:24:56.431836    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:56.431836    9664 round_trippers.go:587]     Content-Type: application/json
	I0407 14:24:56.431836    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:56.431836    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:56.431836    9664 round_trippers.go:587]     Content-Length: 263
	I0407 14:24:56.431836    9664 discovery_client.go:658] "Response Body" body=<
		{
		  "major": "1",
		  "minor": "32",
		  "gitVersion": "v1.32.2",
		  "gitCommit": "67a30c0adcf52bd3f56ff0893ce19966be12991f",
		  "gitTreeState": "clean",
		  "buildDate": "2025-02-12T21:19:47Z",
		  "goVersion": "go1.23.6",
		  "compiler": "gc",
		  "platform": "linux/amd64"
		}
	 >
	I0407 14:24:56.431836    9664 api_server.go:141] control plane version: v1.32.2
	I0407 14:24:56.431836    9664 api_server.go:131] duration metric: took 5.0261939s to wait for apiserver health ...
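The lines above show the apiserver's /healthz endpoint being polled roughly every 500 ms until it stops returning 500 and answers 200 "ok". A minimal Go sketch of such a polling loop follows; it is an illustration under stated assumptions (the URL is the one from the log, the insecure TLS transport and the 500 ms interval are assumptions), not minikube's actual api_server.go code.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitForHealthz polls url until it returns HTTP 200 or the timeout elapses.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// Assumption: skip certificate verification, standing in for whatever
		// CA handling a real client would do against the self-signed apiserver cert.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // healthz reported "ok"
			}
		}
		time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://172.17.81.10:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}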
	I0407 14:24:56.431836    9664 cni.go:84] Creating CNI manager for ""
	I0407 14:24:56.431836    9664 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0407 14:24:56.435498    9664 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0407 14:24:56.449498    9664 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0407 14:24:56.458913    9664 command_runner.go:130] >   File: /opt/cni/bin/portmap
	I0407 14:24:56.459004    9664 command_runner.go:130] >   Size: 3103192   	Blocks: 6064       IO Block: 4096   regular file
	I0407 14:24:56.459004    9664 command_runner.go:130] > Device: 0,17	Inode: 3500        Links: 1
	I0407 14:24:56.459004    9664 command_runner.go:130] > Access: (0755/-rwxr-xr-x)  Uid: (    0/    root)   Gid: (    0/    root)
	I0407 14:24:56.459004    9664 command_runner.go:130] > Access: 2025-04-07 14:23:14.562263100 +0000
	I0407 14:24:56.459116    9664 command_runner.go:130] > Modify: 2025-01-14 09:03:58.000000000 +0000
	I0407 14:24:56.459116    9664 command_runner.go:130] > Change: 2025-04-07 14:23:05.746000000 +0000
	I0407 14:24:56.459116    9664 command_runner.go:130] >  Birth: -
	I0407 14:24:56.459222    9664 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.32.2/kubectl ...
	I0407 14:24:56.459288    9664 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2601 bytes)
	I0407 14:24:56.520006    9664 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0407 14:24:57.879996    9664 command_runner.go:130] > clusterrole.rbac.authorization.k8s.io/kindnet unchanged
	I0407 14:24:57.880066    9664 command_runner.go:130] > clusterrolebinding.rbac.authorization.k8s.io/kindnet unchanged
	I0407 14:24:57.880066    9664 command_runner.go:130] > serviceaccount/kindnet unchanged
	I0407 14:24:57.880066    9664 command_runner.go:130] > daemonset.apps/kindnet configured
	I0407 14:24:57.880066    9664 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.32.2/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.3600489s)
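The completed command above applies the generated kindnet manifest with the kubeadm-provisioned kubectl inside the VM. Run outside minikube's ssh_runner, the same invocation could be sketched in Go as below; the binary and file paths are simply the ones echoed in the log, and the sketch is illustrative rather than minikube's implementation.

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Mirrors the command shown in the log: apply the generated CNI manifest
	// with the cluster's own kubectl binary and kubeconfig.
	cmd := exec.Command("sudo",
		"/var/lib/minikube/binaries/v1.32.2/kubectl", "apply",
		"--kubeconfig=/var/lib/minikube/kubeconfig",
		"-f", "/var/tmp/minikube/cni.yaml")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("apply failed:", err)
	}
}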
	I0407 14:24:57.880132    9664 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 14:24:57.880344    9664 type.go:204] "Request Body" body=""
	I0407 14:24:57.880480    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods
	I0407 14:24:57.880480    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:57.880480    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:57.880480    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:57.888444    9664 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 14:24:57.888444    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:57.888444    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:57.888444    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:57.888444    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:57 GMT
	I0407 14:24:57.888444    9664 round_trippers.go:587]     Audit-Id: 22d378a1-fcfc-425a-8ac5-a9c2887ed740
	I0407 14:24:57.888444    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:57.888444    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:57.891432    9664 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 80 f2 03 0a  0a 0a 00 12 04 31 39 35  |ist..........195|
		00000020  32 1a 00 12 80 29 0a 99  19 0a 18 63 6f 72 65 64  |2....).....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 35 66  |ns-668d6bf9bc-5f|
		00000040  70 34 66 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |p4f..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 34 33 37 32 32 36 61  |ystem".*$437226a|
		00000070  65 2d 65 36 33 64 2d 34  32 34 35 2d 62 62 65 61  |e-e63d-4245-bbea|
		00000080  2d 61 64 35 63 34 31 66  66 39 61 39 33 32 04 31  |-ad5c41ff9a932.1|
		00000090  39 32 32 38 00 42 08 08  e6 b4 cf bf 06 10 00 5a  |9228.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 313865 chars]
	 >
	I0407 14:24:57.893438    9664 system_pods.go:59] 12 kube-system pods found
	I0407 14:24:57.893438    9664 system_pods.go:61] "coredns-668d6bf9bc-5fp4f" [437226ae-e63d-4245-bbea-ad5c41ff9a93] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0407 14:24:57.894472    9664 system_pods.go:61] "etcd-multinode-140200" [50e84c56-5d78-4a51-bd63-4a724ccd5fd8] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0407 14:24:57.894472    9664 system_pods.go:61] "kindnet-pv67r" [5f3d17bc-3df2-48f9-9840-641673243750] Running
	I0407 14:24:57.894472    9664 system_pods.go:61] "kindnet-rnp2q" [e28e853b-b703-4a36-90d2-3af1a37e74e0] Running
	I0407 14:24:57.894472    9664 system_pods.go:61] "kindnet-zkw9q" [123858da-6f70-4b10-b38e-bd930d21dbe4] Running / Ready:ContainersNotReady (containers with unready status: [kindnet-cni]) / ContainersReady:ContainersNotReady (containers with unready status: [kindnet-cni])
	I0407 14:24:57.894472    9664 system_pods.go:61] "kube-apiserver-multinode-140200" [144753dc-c621-45f7-a94a-8b3835eebb12] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0407 14:24:57.894472    9664 system_pods.go:61] "kube-controller-manager-multinode-140200" [a7c6e3bb-197c-434e-9f19-74d7e48b50de] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0407 14:24:57.894472    9664 system_pods.go:61] "kube-proxy-2r7lj" [4892d703-fc43-4f67-8493-eaeae8c5e765] Running
	I0407 14:24:57.894472    9664 system_pods.go:61] "kube-proxy-9rx2d" [2eaab25d-fe0b-4c48-ac6b-42095f5fbce6] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0407 14:24:57.894472    9664 system_pods.go:61] "kube-proxy-kvg58" [ba8a332c-bb4a-4e9c-9a4e-2c578bdc99c1] Running
	I0407 14:24:57.894472    9664 system_pods.go:61] "kube-scheduler-multinode-140200" [88dfeee8-a3c1-485b-abfe-9eaf0057d6cf] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0407 14:24:57.894472    9664 system_pods.go:61] "storage-provisioner" [01df03d8-8816-480c-941b-180069d26997] Running
	I0407 14:24:57.894472    9664 system_pods.go:74] duration metric: took 14.3397ms to wait for pod list to return data ...
	I0407 14:24:57.894472    9664 node_conditions.go:102] verifying NodePressure condition ...
	I0407 14:24:57.895425    9664 type.go:204] "Request Body" body=""
	I0407 14:24:57.895425    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes
	I0407 14:24:57.895425    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:57.895425    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:57.895425    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:57.912638    9664 round_trippers.go:581] Response Status: 200 OK in 17 milliseconds
	I0407 14:24:57.912638    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:57.912882    9664 round_trippers.go:587]     Audit-Id: daa46545-5d5a-4a1c-9beb-7435a82319f5
	I0407 14:24:57.912882    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:57.912882    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:57.912882    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:57.912882    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:57.912882    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:57 GMT
	I0407 14:24:57.913087    9664 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0e 0a 02  76 31 12 08 4e 6f 64 65  |k8s.....v1..Node|
		00000010  4c 69 73 74 12 b1 5f 0a  0a 0a 00 12 04 31 39 35  |List.._......195|
		00000020  33 1a 00 12 99 26 0a 86  12 0a 10 6d 75 6c 74 69  |3....&.....multi|
		00000030  6e 6f 64 65 2d 31 34 30  32 30 30 12 00 1a 00 22  |node-140200...."|
		00000040  00 2a 24 31 66 35 33 62  34 63 64 2d 61 62 30 31  |.*$1f53b4cd-ab01|
		00000050  2d 34 32 63 61 2d 61 36  61 36 2d 61 39 33 65 66  |-42ca-a6a6-a93ef|
		00000060  63 39 62 64 34 64 66 32  04 31 39 34 39 38 00 42  |c9bd4df2.19498.B|
		00000070  08 08 dd b4 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000080  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000090  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		000000a0  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000b0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/ar [truncated 59407 chars]
	 >
	I0407 14:24:57.913785    9664 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 14:24:57.913850    9664 node_conditions.go:123] node cpu capacity is 2
	I0407 14:24:57.913913    9664 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 14:24:57.913913    9664 node_conditions.go:123] node cpu capacity is 2
	I0407 14:24:57.913913    9664 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 14:24:57.913913    9664 node_conditions.go:123] node cpu capacity is 2
	I0407 14:24:57.913913    9664 node_conditions.go:105] duration metric: took 19.4413ms to run NodePressure ...
	I0407 14:24:57.913913    9664 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 14:24:58.482931    9664 command_runner.go:130] > [addons] Applied essential addon: CoreDNS
	I0407 14:24:58.483053    9664 command_runner.go:130] > [addons] Applied essential addon: kube-proxy
	I0407 14:24:58.483131    9664 kubeadm.go:724] waiting for restarted kubelet to initialise ...
	I0407 14:24:58.483338    9664 type.go:204] "Request Body" body=""
	I0407 14:24:58.483536    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods?labelSelector=tier%3Dcontrol-plane
	I0407 14:24:58.483590    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:58.483590    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:58.483590    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:58.488617    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:24:58.488617    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:58.488617    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:58.488617    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:58.488679    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:58.488679    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:58.488679    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:58 GMT
	I0407 14:24:58.488679    9664 round_trippers.go:587]     Audit-Id: 5ccbbc91-0502-4dcc-aeb1-90a4d3b77a42
	I0407 14:24:58.489892    9664 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 d1 bc 01 0a  0a 0a 00 12 04 31 39 36  |ist..........196|
		00000020  39 1a 00 12 97 2d 0a d5  1a 0a 15 65 74 63 64 2d  |9....-.....etcd-|
		00000030  6d 75 6c 74 69 6e 6f 64  65 2d 31 34 30 32 30 30  |multinode-140200|
		00000040  12 00 1a 0b 6b 75 62 65  2d 73 79 73 74 65 6d 22  |....kube-system"|
		00000050  00 2a 24 35 30 65 38 34  63 35 36 2d 35 64 37 38  |.*$50e84c56-5d78|
		00000060  2d 34 61 35 31 2d 62 64  36 33 2d 34 61 37 32 34  |-4a51-bd63-4a724|
		00000070  63 63 64 35 66 64 38 32  04 31 39 31 32 38 00 42  |ccd5fd82.19128.B|
		00000080  08 08 b7 c0 cf bf 06 10  00 5a 11 0a 09 63 6f 6d  |.........Z...com|
		00000090  70 6f 6e 65 6e 74 12 04  65 74 63 64 5a 15 0a 04  |ponent..etcdZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 4d 0a 30 6b  75 62 65 61 64 6d 2e 6b  |anebM.0kubeadm.k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 65 74 63  |ubernetes.io/et [truncated 118341 chars]
	 >
	I0407 14:24:58.490300    9664 kubeadm.go:739] kubelet initialised
	I0407 14:24:58.490378    9664 kubeadm.go:740] duration metric: took 7.2469ms waiting for restarted kubelet to initialise ...
	I0407 14:24:58.490405    9664 pod_ready.go:36] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
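The wait that starts here lists kube-system pods carrying the labels above and, as the pod_ready.go:98 lines further down show, skips a pod whenever its hosting node is not Ready. A condensed client-go sketch of that node check follows; the kubeconfig path is an assumption and this is not the actual pod_ready.go logic.

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// nodeReady reports whether a node has the condition Ready=True.
func nodeReady(node *corev1.Node) bool {
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	ctx := context.Background()
	pods, err := cs.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, p := range pods.Items {
		node, err := cs.CoreV1().Nodes().Get(ctx, p.Spec.NodeName, metav1.GetOptions{})
		if err != nil || !nodeReady(node) {
			fmt.Printf("skipping %s: hosting node not Ready\n", p.Name)
			continue
		}
		// ... otherwise wait on the pod's own Ready condition.
	}
}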
	I0407 14:24:58.490405    9664 type.go:204] "Request Body" body=""
	I0407 14:24:58.490405    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods
	I0407 14:24:58.490405    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:58.490405    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:58.490405    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:58.495727    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:24:58.495768    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:58.495768    9664 round_trippers.go:587]     Audit-Id: a0d36550-e66c-4556-b8be-272016a1b460
	I0407 14:24:58.495768    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:58.495768    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:58.495768    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:58.495768    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:58.495768    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:58 GMT
	I0407 14:24:58.498361    9664 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 a4 ef 03 0a  0a 0a 00 12 04 31 39 36  |ist..........196|
		00000020  39 1a 00 12 80 29 0a 99  19 0a 18 63 6f 72 65 64  |9....).....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 35 66  |ns-668d6bf9bc-5f|
		00000040  70 34 66 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |p4f..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 34 33 37 32 32 36 61  |ystem".*$437226a|
		00000070  65 2d 65 36 33 64 2d 34  32 34 35 2d 62 62 65 61  |e-e63d-4245-bbea|
		00000080  2d 61 64 35 63 34 31 66  66 39 61 39 33 32 04 31  |-ad5c41ff9a932.1|
		00000090  39 32 32 38 00 42 08 08  e6 b4 cf bf 06 10 00 5a  |9228.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 312131 chars]
	 >
	I0407 14:24:58.498361    9664 pod_ready.go:79] waiting up to 4m0s for pod "coredns-668d6bf9bc-5fp4f" in "kube-system" namespace to be "Ready" ...
	I0407 14:24:58.498361    9664 type.go:168] "Request Body" body=""
	I0407 14:24:58.498361    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-5fp4f
	I0407 14:24:58.498361    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:58.499363    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:58.499363    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:58.501361    9664 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0407 14:24:58.501361    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:58.501361    9664 round_trippers.go:587]     Audit-Id: 71f32bdb-8717-42a6-a203-adfaf9de1617
	I0407 14:24:58.501361    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:58.501361    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:58.501361    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:58.501361    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:58.501361    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:58 GMT
	I0407 14:24:58.502383    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  80 29 0a 99 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.).....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 35 66 70 34 66 12  |68d6bf9bc-5fp4f.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 34 33 37  32 32 36 61 65 2d 65 36  |m".*$437226ae-e6|
		00000060  33 64 2d 34 32 34 35 2d  62 62 65 61 2d 61 64 35  |3d-4245-bbea-ad5|
		00000070  63 34 31 66 66 39 61 39  33 32 04 31 39 32 32 38  |c41ff9a932.19228|
		00000080  00 42 08 08 e6 b4 cf bf  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 25036 chars]
	 >
	I0407 14:24:58.502383    9664 type.go:168] "Request Body" body=""
	I0407 14:24:58.502383    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:24:58.502383    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:58.502383    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:58.502383    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:58.504366    9664 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0407 14:24:58.505371    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:58.505371    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:58.505371    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:58.505371    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:58.505371    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:58 GMT
	I0407 14:24:58.505426    9664 round_trippers.go:587]     Audit-Id: bbe0d86b-3fd6-4195-afba-946e63c7d275
	I0407 14:24:58.505426    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:58.505426    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:24:58.505426    9664 pod_ready.go:98] node "multinode-140200" hosting pod "coredns-668d6bf9bc-5fp4f" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200" has status "Ready":"False"
	I0407 14:24:58.505426    9664 pod_ready.go:82] duration metric: took 7.0645ms for pod "coredns-668d6bf9bc-5fp4f" in "kube-system" namespace to be "Ready" ...
	E0407 14:24:58.505426    9664 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-140200" hosting pod "coredns-668d6bf9bc-5fp4f" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200" has status "Ready":"False"
	I0407 14:24:58.505426    9664 pod_ready.go:79] waiting up to 4m0s for pod "etcd-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:24:58.505426    9664 type.go:168] "Request Body" body=""
	I0407 14:24:58.505991    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-140200
	I0407 14:24:58.505991    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:58.505991    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:58.505991    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:58.507666    9664 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0407 14:24:58.507666    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:58.507666    9664 round_trippers.go:587]     Audit-Id: 1a4b46d3-b9e2-4f73-9ab1-78a131a71898
	I0407 14:24:58.507666    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:58.507666    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:58.507666    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:58.507666    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:58.507666    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:58 GMT
	I0407 14:24:58.508661    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  97 2d 0a d5 1a 0a 15 65  74 63 64 2d 6d 75 6c 74  |.-.....etcd-mult|
		00000020  69 6e 6f 64 65 2d 31 34  30 32 30 30 12 00 1a 0b  |inode-140200....|
		00000030  6b 75 62 65 2d 73 79 73  74 65 6d 22 00 2a 24 35  |kube-system".*$5|
		00000040  30 65 38 34 63 35 36 2d  35 64 37 38 2d 34 61 35  |0e84c56-5d78-4a5|
		00000050  31 2d 62 64 36 33 2d 34  61 37 32 34 63 63 64 35  |1-bd63-4a724ccd5|
		00000060  66 64 38 32 04 31 39 31  32 38 00 42 08 08 b7 c0  |fd82.19128.B....|
		00000070  cf bf 06 10 00 5a 11 0a  09 63 6f 6d 70 6f 6e 65  |.....Z...compone|
		00000080  6e 74 12 04 65 74 63 64  5a 15 0a 04 74 69 65 72  |nt..etcdZ...tier|
		00000090  12 0d 63 6f 6e 74 72 6f  6c 2d 70 6c 61 6e 65 62  |..control-planeb|
		000000a0  4d 0a 30 6b 75 62 65 61  64 6d 2e 6b 75 62 65 72  |M.0kubeadm.kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 65 74 63 64 2e 61 64  |netes.io/etcd.ad|
		000000c0  76 65 72 74 69 73 65 2d  63 6c 69 65 6e 74 2d 75  |vertise-client- [truncated 27650 chars]
	 >
	I0407 14:24:58.508661    9664 type.go:168] "Request Body" body=""
	I0407 14:24:58.508661    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:24:58.508661    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:58.508661    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:58.508661    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:58.518658    9664 round_trippers.go:581] Response Status: 200 OK in 9 milliseconds
	I0407 14:24:58.518658    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:58.518658    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:58.518658    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:58.518658    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:58.518658    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:58.518658    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:58 GMT
	I0407 14:24:58.518658    9664 round_trippers.go:587]     Audit-Id: 3548c9a5-cb6e-4cef-a1ad-881a049000f1
	I0407 14:24:58.519659    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:24:58.519659    9664 pod_ready.go:98] node "multinode-140200" hosting pod "etcd-multinode-140200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200" has status "Ready":"False"
	I0407 14:24:58.519659    9664 pod_ready.go:82] duration metric: took 14.2335ms for pod "etcd-multinode-140200" in "kube-system" namespace to be "Ready" ...
	E0407 14:24:58.519659    9664 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-140200" hosting pod "etcd-multinode-140200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200" has status "Ready":"False"
	I0407 14:24:58.519659    9664 pod_ready.go:79] waiting up to 4m0s for pod "kube-apiserver-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:24:58.519659    9664 type.go:168] "Request Body" body=""
	I0407 14:24:58.519659    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-140200
	I0407 14:24:58.519659    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:58.519659    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:58.519659    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:58.522754    9664 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:24:58.522820    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:58.522820    9664 round_trippers.go:587]     Audit-Id: decdf752-1ebc-442e-ac3f-4715af91eeb2
	I0407 14:24:58.522820    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:58.522820    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:58.522820    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:58.522820    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:58.522820    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:58 GMT
	I0407 14:24:58.523253    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  db 36 0a e5 1c 0a 1f 6b  75 62 65 2d 61 70 69 73  |.6.....kube-apis|
		00000020  65 72 76 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |erver-multinode-|
		00000030  31 34 30 32 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |140200....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 31 34 34 37 35 33 64  |ystem".*$144753d|
		00000050  63 2d 63 36 32 31 2d 34  35 66 37 2d 61 39 34 61  |c-c621-45f7-a94a|
		00000060  2d 38 62 33 38 33 35 65  65 62 62 31 32 32 04 31  |-8b3835eebb122.1|
		00000070  39 33 38 38 00 42 08 08  b7 c0 cf bf 06 10 00 5a  |9388.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 61 70 69 73 65  72 76 65 72 5a 15 0a 04  |be-apiserverZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 54 0a 3f 6b  75 62 65 61 64 6d 2e 6b  |anebT.?kubeadm.k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 6b 75 62  |ubernetes.io/ku [truncated 33721 chars]
	 >
	I0407 14:24:58.523253    9664 type.go:168] "Request Body" body=""
	I0407 14:24:58.523253    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:24:58.523253    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:58.523253    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:58.523253    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:58.525641    9664 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:24:58.525730    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:58.525730    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:58 GMT
	I0407 14:24:58.525730    9664 round_trippers.go:587]     Audit-Id: 889d9163-3962-495e-91f0-1fc48bda4632
	I0407 14:24:58.525730    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:58.525730    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:58.525730    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:58.525730    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:58.525730    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:24:58.526313    9664 pod_ready.go:98] node "multinode-140200" hosting pod "kube-apiserver-multinode-140200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200" has status "Ready":"False"
	I0407 14:24:58.526313    9664 pod_ready.go:82] duration metric: took 6.6532ms for pod "kube-apiserver-multinode-140200" in "kube-system" namespace to be "Ready" ...
	E0407 14:24:58.526313    9664 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-140200" hosting pod "kube-apiserver-multinode-140200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200" has status "Ready":"False"
	I0407 14:24:58.526313    9664 pod_ready.go:79] waiting up to 4m0s for pod "kube-controller-manager-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:24:58.526313    9664 type.go:168] "Request Body" body=""
	I0407 14:24:58.526313    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-140200
	I0407 14:24:58.526313    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:58.526313    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:58.526313    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:58.529335    9664 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:24:58.529335    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:58.529335    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:58.529335    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:58.529335    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:58 GMT
	I0407 14:24:58.529335    9664 round_trippers.go:587]     Audit-Id: 8847b9ad-fc10-44e9-bc10-cd29fa5ace23
	I0407 14:24:58.529400    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:58.529400    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:58.529400    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  a7 33 0a d3 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.3....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 31 34 30 32 30 30 12  |ultinode-140200.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 61 37 63 36 65 33  62 62 2d 31 39 37 63 2d  |*$a7c6e3bb-197c-|
		00000060  34 33 34 65 2d 39 66 31  39 2d 37 34 64 37 65 34  |434e-9f19-74d7e4|
		00000070  38 62 35 30 64 65 32 04  31 39 31 36 38 00 42 08  |8b50de2.19168.B.|
		00000080  08 e0 b4 cf bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 31521 chars]
	 >
	I0407 14:24:58.529929    9664 type.go:168] "Request Body" body=""
	I0407 14:24:58.529980    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:24:58.529980    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:58.529980    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:58.529980    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:58.532273    9664 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:24:58.532273    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:58.532273    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:58.532273    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:58.532273    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:58.532273    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:58 GMT
	I0407 14:24:58.532273    9664 round_trippers.go:587]     Audit-Id: f2b41d93-47a7-47de-bd48-133fc3874d24
	I0407 14:24:58.532273    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:58.532368    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:24:58.532368    9664 pod_ready.go:98] node "multinode-140200" hosting pod "kube-controller-manager-multinode-140200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200" has status "Ready":"False"
	I0407 14:24:58.532368    9664 pod_ready.go:82] duration metric: took 6.0549ms for pod "kube-controller-manager-multinode-140200" in "kube-system" namespace to be "Ready" ...
	E0407 14:24:58.532368    9664 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-140200" hosting pod "kube-controller-manager-multinode-140200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200" has status "Ready":"False"
	I0407 14:24:58.532368    9664 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-2r7lj" in "kube-system" namespace to be "Ready" ...
	I0407 14:24:58.532904    9664 type.go:168] "Request Body" body=""
	I0407 14:24:58.683517    9664 request.go:661] Waited for 150.6121ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2r7lj
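The "Waited ... due to client-side throttling" message comes from client-go's own request rate limiter (the QPS and Burst fields on rest.Config), not from the apiserver's priority-and-fairness. A short sketch of loosening that limiter on a client is shown below; the kubeconfig path and the values chosen are illustrative assumptions.

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig") // assumed path
	if err != nil {
		panic(err)
	}
	// client-go delays requests on the client side once QPS/Burst are exceeded;
	// raising them (illustrative values) avoids waits like the one logged above.
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
}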
	I0407 14:24:58.683979    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2r7lj
	I0407 14:24:58.683979    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:58.683979    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:58.683979    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:58.687869    9664 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:24:58.687977    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:58.687977    9664 round_trippers.go:587]     Audit-Id: c51293cf-a4e0-411a-b86d-7c21c848493b
	I0407 14:24:58.687977    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:58.687977    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:58.687977    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:58.687977    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:58.687977    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:58 GMT
	I0407 14:24:58.688387    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  a0 25 0a be 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.%.....kube-prox|
		00000020  79 2d 32 72 37 6c 6a 12  0b 6b 75 62 65 2d 70 72  |y-2r7lj..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 34 38 39  32 64 37 30 33 2d 66 63  |m".*$4892d703-fc|
		00000050  34 33 2d 34 66 36 37 2d  38 34 39 33 2d 65 61 65  |43-4f67-8493-eae|
		00000060  61 65 38 63 35 65 37 36  35 32 03 36 33 32 38 00  |ae8c5e7652.6328.|
		00000070  42 08 08 a0 b6 cf bf 06  10 00 5a 26 0a 18 63 6f  |B.........Z&..co|
		00000080  6e 74 72 6f 6c 6c 65 72  2d 72 65 76 69 73 69 6f  |ntroller-revisio|
		00000090  6e 2d 68 61 73 68 12 0a  37 62 62 38 34 63 34 39  |n-hash..7bb84c49|
		000000a0  38 34 5a 15 0a 07 6b 38  73 2d 61 70 70 12 0a 6b  |84Z...k8s-app..k|
		000000b0  75 62 65 2d 70 72 6f 78  79 5a 1c 0a 17 70 6f 64  |ube-proxyZ...pod|
		000000c0  2d 74 65 6d 70 6c 61 74  65 2d 67 65 6e 65 72 61  |-template-gener [truncated 22666 chars]
	 >
	I0407 14:24:58.688645    9664 type.go:168] "Request Body" body=""
	I0407 14:24:58.883647    9664 request.go:661] Waited for 195.001ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:24:58.884127    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:24:58.884127    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:58.884127    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:58.884127    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:58.887541    9664 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:24:58.887541    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:58.887633    9664 round_trippers.go:587]     Content-Length: 3463
	I0407 14:24:58.887633    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:58 GMT
	I0407 14:24:58.887633    9664 round_trippers.go:587]     Audit-Id: 5b7d7e48-20ed-4708-a742-9d906f8ce484
	I0407 14:24:58.887633    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:58.887633    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:58.887633    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:58.887633    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:58.887869    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f0 1a 0a b0 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 04 31 37 38 32 38 00  |f2f300172.17828.|
		00000060  42 08 08 a0 b6 cf bf 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16110 chars]
	 >
	I0407 14:24:58.888103    9664 pod_ready.go:93] pod "kube-proxy-2r7lj" in "kube-system" namespace has status "Ready":"True"
	I0407 14:24:58.888103    9664 pod_ready.go:82] duration metric: took 355.7322ms for pod "kube-proxy-2r7lj" in "kube-system" namespace to be "Ready" ...
	I0407 14:24:58.888179    9664 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-9rx2d" in "kube-system" namespace to be "Ready" ...
	I0407 14:24:58.888314    9664 type.go:168] "Request Body" body=""
	I0407 14:24:59.083862    9664 request.go:661] Waited for 195.5228ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9rx2d
	I0407 14:24:59.083862    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9rx2d
	I0407 14:24:59.084288    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:59.084288    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:59.084288    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:59.088499    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:24:59.088582    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:59.088582    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:59.088582    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:59.088582    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:59.088582    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:59 GMT
	I0407 14:24:59.088582    9664 round_trippers.go:587]     Audit-Id: b5ebf68f-ef34-4168-bcae-f7306cc68792
	I0407 14:24:59.088582    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:59.088750    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  87 26 0a bf 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 39 72 78 32 64 12  0b 6b 75 62 65 2d 70 72  |y-9rx2d..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 32 65 61  61 62 32 35 64 2d 66 65  |m".*$2eaab25d-fe|
		00000050  30 62 2d 34 63 34 38 2d  61 63 36 62 2d 34 32 30  |0b-4c48-ac6b-420|
		00000060  39 35 66 35 66 62 63 65  36 32 04 31 39 36 35 38  |95f5fbce62.19658|
		00000070  00 42 08 08 e5 b4 cf bf  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 37 62 62 38 34 63 34  |on-hash..7bb84c4|
		000000a0  39 38 34 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |984Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23147 chars]
	 >
	I0407 14:24:59.089279    9664 type.go:168] "Request Body" body=""
	I0407 14:24:59.284200    9664 request.go:661] Waited for 194.9195ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:24:59.284913    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:24:59.284913    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:59.284913    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:59.284913    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:59.291290    9664 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 14:24:59.291401    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:59.291401    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:59.291401    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:59.291401    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:59.291401    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:59 GMT
	I0407 14:24:59.291401    9664 round_trippers.go:587]     Audit-Id: 4912de46-39a7-4e90-b37f-100477e9d131
	I0407 14:24:59.291401    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:59.291401    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:24:59.292119    9664 pod_ready.go:98] node "multinode-140200" hosting pod "kube-proxy-9rx2d" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200" has status "Ready":"False"
	I0407 14:24:59.292119    9664 pod_ready.go:82] duration metric: took 403.937ms for pod "kube-proxy-9rx2d" in "kube-system" namespace to be "Ready" ...
	E0407 14:24:59.292119    9664 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-140200" hosting pod "kube-proxy-9rx2d" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200" has status "Ready":"False"
	I0407 14:24:59.292119    9664 pod_ready.go:79] waiting up to 4m0s for pod "kube-proxy-kvg58" in "kube-system" namespace to be "Ready" ...
	I0407 14:24:59.292230    9664 type.go:168] "Request Body" body=""
	I0407 14:24:59.484482    9664 request.go:661] Waited for 192.2501ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kvg58
	I0407 14:24:59.484482    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kvg58
	I0407 14:24:59.484482    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:59.484482    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:59.484482    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:59.489200    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:24:59.489200    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:59.489272    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:59.489272    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:59 GMT
	I0407 14:24:59.489272    9664 round_trippers.go:587]     Audit-Id: 3b0fee65-1040-4e7e-9097-0d8ea8407b2b
	I0407 14:24:59.489272    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:59.489272    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:59.489272    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:59.489625    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  a5 26 0a c2 15 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 6b 76 67 35 38 12  0b 6b 75 62 65 2d 70 72  |y-kvg58..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 62 61 38  61 33 33 32 63 2d 62 62  |m".*$ba8a332c-bb|
		00000050  34 61 2d 34 65 39 63 2d  39 61 34 65 2d 32 63 35  |4a-4e9c-9a4e-2c5|
		00000060  37 38 62 64 63 39 39 63  31 32 04 31 38 33 36 38  |78bdc99c12.18368|
		00000070  00 42 08 08 c8 b8 cf bf  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 37 62 62 38 34 63 34  |on-hash..7bb84c4|
		000000a0  39 38 34 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |984Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23303 chars]
	 >
	I0407 14:24:59.490005    9664 type.go:168] "Request Body" body=""
	I0407 14:24:59.684378    9664 request.go:661] Waited for 194.3711ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/nodes/multinode-140200-m03
	I0407 14:24:59.684378    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200-m03
	I0407 14:24:59.684378    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:59.684378    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:59.684378    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:59.689353    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:24:59.689353    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:59.689353    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:59.689353    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:59.689353    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:59.689353    9664 round_trippers.go:587]     Content-Length: 3882
	I0407 14:24:59.689353    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:59 GMT
	I0407 14:24:59.689353    9664 round_trippers.go:587]     Audit-Id: 8d63d21a-8709-4d3a-bb59-9321d0f8c2d0
	I0407 14:24:59.689353    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:59.689881    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 93 1e 0a eb 12 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 33 12 00 1a 00  |e-140200-m03....|
		00000030  22 00 2a 24 64 33 34 31  65 64 66 63 2d 36 33 31  |".*$d341edfc-631|
		00000040  35 2d 34 62 37 62 2d 38  33 30 34 2d 66 39 32 62  |5-4b7b-8304-f92b|
		00000050  63 34 32 31 32 65 39 33  32 04 31 39 35 32 38 00  |c4212e932.19528.|
		00000060  42 08 08 89 be cf bf 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 18167 chars]
	 >
	I0407 14:24:59.690143    9664 pod_ready.go:98] node "multinode-140200-m03" hosting pod "kube-proxy-kvg58" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200-m03" has status "Ready":"Unknown"
	I0407 14:24:59.690143    9664 pod_ready.go:82] duration metric: took 398.021ms for pod "kube-proxy-kvg58" in "kube-system" namespace to be "Ready" ...
	E0407 14:24:59.690143    9664 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-140200-m03" hosting pod "kube-proxy-kvg58" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200-m03" has status "Ready":"Unknown"
	I0407 14:24:59.690143    9664 pod_ready.go:79] waiting up to 4m0s for pod "kube-scheduler-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:24:59.690143    9664 type.go:168] "Request Body" body=""
	I0407 14:24:59.884710    9664 request.go:661] Waited for 194.5656ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-140200
	I0407 14:24:59.885265    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-140200
	I0407 14:24:59.885454    9664 round_trippers.go:476] Request Headers:
	I0407 14:24:59.885454    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:24:59.885454    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:24:59.890245    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:24:59.890332    9664 round_trippers.go:584] Response Headers:
	I0407 14:24:59.890332    9664 round_trippers.go:587]     Audit-Id: 9eb4185b-6080-422f-bfc0-0a6476ac1505
	I0407 14:24:59.890332    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:24:59.890394    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:24:59.890394    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:24:59.890394    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:24:59.890394    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:24:59 GMT
	I0407 14:24:59.890713    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  a0 25 0a bb 18 0a 1f 6b  75 62 65 2d 73 63 68 65  |.%.....kube-sche|
		00000020  64 75 6c 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |duler-multinode-|
		00000030  31 34 30 32 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |140200....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 38 38 64 66 65 65 65  |ystem".*$88dfeee|
		00000050  38 2d 61 33 63 31 2d 34  38 35 62 2d 61 62 66 65  |8-a3c1-485b-abfe|
		00000060  2d 39 65 61 66 30 30 35  37 64 36 63 66 32 04 31  |-9eaf0057d6cf2.1|
		00000070  39 30 38 38 00 42 08 08  e0 b4 cf bf 06 10 00 5a  |9088.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 73 63 68 65 64  75 6c 65 72 5a 15 0a 04  |be-schedulerZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 3d 0a 19 6b  75 62 65 72 6e 65 74 65  |aneb=..kubernete|
		000000c0  73 2e 69 6f 2f 63 6f 6e  66 69 67 2e 68 61 73 68  |s.io/config.has [truncated 22666 chars]
	 >
	I0407 14:24:59.890996    9664 type.go:168] "Request Body" body=""
	I0407 14:25:00.083879    9664 request.go:661] Waited for 192.8823ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:00.084495    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:00.084495    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:00.084495    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:00.084495    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:00.089561    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:25:00.089561    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:00.089561    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:00 GMT
	I0407 14:25:00.089561    9664 round_trippers.go:587]     Audit-Id: 5a1fc21a-6f71-450c-95a0-cc1a6067d95d
	I0407 14:25:00.089561    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:00.089561    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:00.089561    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:00.089561    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:00.089942    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:00.090176    9664 pod_ready.go:98] node "multinode-140200" hosting pod "kube-scheduler-multinode-140200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200" has status "Ready":"False"
	I0407 14:25:00.090176    9664 pod_ready.go:82] duration metric: took 400.0294ms for pod "kube-scheduler-multinode-140200" in "kube-system" namespace to be "Ready" ...
	E0407 14:25:00.090176    9664 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-140200" hosting pod "kube-scheduler-multinode-140200" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200" has status "Ready":"False"
	I0407 14:25:00.090176    9664 pod_ready.go:39] duration metric: took 1.599758s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 14:25:00.090176    9664 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0407 14:25:00.110151    9664 command_runner.go:130] > -16
	I0407 14:25:00.110151    9664 ops.go:34] apiserver oom_adj: -16
	I0407 14:25:00.110315    9664 kubeadm.go:597] duration metric: took 13.3275744s to restartPrimaryControlPlane
	I0407 14:25:00.110315    9664 kubeadm.go:394] duration metric: took 13.3983331s to StartCluster
	I0407 14:25:00.110315    9664 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:25:00.110387    9664 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 14:25:00.112220    9664 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:25:00.113757    9664 start.go:235] Will wait 6m0s for node &{Name: IP:172.17.81.10 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 14:25:00.113757    9664 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0407 14:25:00.114273    9664 config.go:182] Loaded profile config "multinode-140200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 14:25:00.118071    9664 out.go:177] * Verifying Kubernetes components...
	I0407 14:25:00.120740    9664 out.go:177] * Enabled addons: 
	I0407 14:25:00.126386    9664 addons.go:514] duration metric: took 12.6283ms for enable addons: enabled=[]
	I0407 14:25:00.134265    9664 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:25:00.400679    9664 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 14:25:00.425441    9664 node_ready.go:35] waiting up to 6m0s for node "multinode-140200" to be "Ready" ...
	I0407 14:25:00.425441    9664 type.go:168] "Request Body" body=""
	I0407 14:25:00.425441    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:00.425441    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:00.425441    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:00.425441    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:00.430472    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:25:00.430472    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:00.430472    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:00 GMT
	I0407 14:25:00.430472    9664 round_trippers.go:587]     Audit-Id: aabdb8a9-d716-4491-930f-2f847139840c
	I0407 14:25:00.430472    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:00.430472    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:00.430608    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:00.430608    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:00.430879    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:00.925788    9664 type.go:168] "Request Body" body=""
	I0407 14:25:00.925788    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:00.925788    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:00.925788    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:00.925788    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:00.930499    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:00.930593    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:00.930593    9664 round_trippers.go:587]     Audit-Id: 62aef4e6-1b18-47b2-b52a-6919765d656a
	I0407 14:25:00.930593    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:00.930593    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:00.930593    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:00.930593    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:00.930593    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:00 GMT
	I0407 14:25:00.930801    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:01.425699    9664 type.go:168] "Request Body" body=""
	I0407 14:25:01.425699    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:01.425699    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:01.425699    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:01.425699    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:01.430342    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:01.430342    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:01.430443    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:01.430443    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:01 GMT
	I0407 14:25:01.430443    9664 round_trippers.go:587]     Audit-Id: bfaee13d-9a70-453c-9f17-334c020b066b
	I0407 14:25:01.430443    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:01.430443    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:01.430443    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:01.430904    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:01.925956    9664 type.go:168] "Request Body" body=""
	I0407 14:25:01.925956    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:01.925956    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:01.925956    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:01.925956    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:01.931123    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:25:01.931200    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:01.931200    9664 round_trippers.go:587]     Audit-Id: 6b8ad517-a924-4514-95b1-71846a38b965
	I0407 14:25:01.931200    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:01.931200    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:01.931255    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:01.931255    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:01.931255    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:01 GMT
	I0407 14:25:01.931599    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:02.425806    9664 type.go:168] "Request Body" body=""
	I0407 14:25:02.425806    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:02.425806    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:02.425806    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:02.425806    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:02.430726    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:02.430866    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:02.430866    9664 round_trippers.go:587]     Audit-Id: e12334c6-a2d8-41dd-8d6a-ccd5895165d8
	I0407 14:25:02.430866    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:02.430866    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:02.430866    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:02.430915    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:02.430915    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:02 GMT
	I0407 14:25:02.431016    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:02.431016    9664 node_ready.go:53] node "multinode-140200" has status "Ready":"False"
	I0407 14:25:02.926578    9664 type.go:168] "Request Body" body=""
	I0407 14:25:02.926578    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:02.926578    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:02.926578    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:02.926578    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:02.931394    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:02.931394    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:02.931394    9664 round_trippers.go:587]     Audit-Id: 15696a70-5440-47a7-8e90-936f91eac45c
	I0407 14:25:02.931394    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:02.931394    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:02.931394    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:02.931394    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:02.931394    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:02 GMT
	I0407 14:25:02.932088    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:03.426549    9664 type.go:168] "Request Body" body=""
	I0407 14:25:03.427211    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:03.427211    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:03.427211    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:03.427211    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:03.435708    9664 round_trippers.go:581] Response Status: 200 OK in 8 milliseconds
	I0407 14:25:03.435708    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:03.435841    9664 round_trippers.go:587]     Audit-Id: 589e825f-d3a2-47c4-a1e3-31b039a1ee65
	I0407 14:25:03.435841    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:03.435841    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:03.435841    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:03.435841    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:03.435841    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:03 GMT
	I0407 14:25:03.436158    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:03.925626    9664 type.go:168] "Request Body" body=""
	I0407 14:25:03.925626    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:03.925626    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:03.925626    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:03.925626    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:03.931299    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:25:03.931395    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:03.931395    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:03.931395    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:03.931395    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:03.931395    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:03.931479    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:03 GMT
	I0407 14:25:03.931479    9664 round_trippers.go:587]     Audit-Id: 72ce6fa3-8d21-4487-ad2f-396ba17685b7
	I0407 14:25:03.931794    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:04.426305    9664 type.go:168] "Request Body" body=""
	I0407 14:25:04.426305    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:04.426305    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:04.426305    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:04.426305    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:04.433014    9664 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 14:25:04.433014    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:04.433014    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:04.433014    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:04 GMT
	I0407 14:25:04.433014    9664 round_trippers.go:587]     Audit-Id: 96661bd7-8ec3-4f7a-a28d-b2921bfdee96
	I0407 14:25:04.433014    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:04.433014    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:04.433014    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:04.433014    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:04.433567    9664 node_ready.go:53] node "multinode-140200" has status "Ready":"False"
	I0407 14:25:04.925669    9664 type.go:168] "Request Body" body=""
	I0407 14:25:04.925669    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:04.925669    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:04.925669    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:04.925669    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:04.932988    9664 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 14:25:04.933018    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:04.933018    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:04 GMT
	I0407 14:25:04.933018    9664 round_trippers.go:587]     Audit-Id: f99a7bf1-2469-4aca-8617-6ab5b44b8a03
	I0407 14:25:04.933018    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:04.933018    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:04.933018    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:04.933018    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:04.933018    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:05.426505    9664 type.go:168] "Request Body" body=""
	I0407 14:25:05.427190    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:05.427190    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:05.427190    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:05.427190    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:05.431857    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:05.432239    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:05.432239    9664 round_trippers.go:587]     Audit-Id: 15a9fde2-9288-4da4-982a-cfa791fefcbf
	I0407 14:25:05.432239    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:05.432239    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:05.432239    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:05.432239    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:05.432239    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:05 GMT
	I0407 14:25:05.433876    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:05.925969    9664 type.go:168] "Request Body" body=""
	I0407 14:25:05.925969    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:05.925969    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:05.925969    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:05.925969    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:05.931166    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:05.931198    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:05.931198    9664 round_trippers.go:587]     Audit-Id: 3d6c54fb-d40d-48f0-8715-ec940ebd6700
	I0407 14:25:05.931198    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:05.931198    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:05.931198    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:05.931198    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:05.931198    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:05 GMT
	I0407 14:25:05.931566    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:06.426412    9664 type.go:168] "Request Body" body=""
	I0407 14:25:06.426412    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:06.426412    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:06.426412    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:06.426412    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:06.431915    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:25:06.431915    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:06.431915    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:06 GMT
	I0407 14:25:06.431915    9664 round_trippers.go:587]     Audit-Id: 9b213c6a-a4ab-496e-bbe2-2dd0fbeac773
	I0407 14:25:06.432019    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:06.432019    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:06.432019    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:06.432019    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:06.434687    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:06.434837    9664 node_ready.go:53] node "multinode-140200" has status "Ready":"False"
	I0407 14:25:06.926115    9664 type.go:168] "Request Body" body=""
	I0407 14:25:06.926115    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:06.926115    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:06.926115    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:06.926115    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:06.932252    9664 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 14:25:06.932396    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:06.932396    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:06.932562    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:06.932562    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:06 GMT
	I0407 14:25:06.932562    9664 round_trippers.go:587]     Audit-Id: 5e962049-1c8f-42ed-8e17-4868bf148aec
	I0407 14:25:06.932562    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:06.932562    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:06.933192    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:07.426273    9664 type.go:168] "Request Body" body=""
	I0407 14:25:07.426273    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:07.426273    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:07.426273    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:07.426273    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:07.430727    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:07.430727    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:07.430727    9664 round_trippers.go:587]     Audit-Id: eb79b29d-7a8a-487f-ba45-e90af29cb7ac
	I0407 14:25:07.430727    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:07.430727    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:07.430727    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:07.430727    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:07.430727    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:07 GMT
	I0407 14:25:07.431266    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 99 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  34 39 38 00 42 08 08 dd  |d4df2.19498.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23229 chars]
	 >
	I0407 14:25:07.926104    9664 type.go:168] "Request Body" body=""
	I0407 14:25:07.926104    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:07.926104    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:07.926104    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:07.926104    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:07.930754    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:07.930754    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:07.930754    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:07.931022    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:07.931022    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:07.931022    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:07.931022    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:07 GMT
	I0407 14:25:07.931022    9664 round_trippers.go:587]     Audit-Id: 350f91d8-1900-4d12-88b7-b7c8b2d1637f
	I0407 14:25:07.931236    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:08.426046    9664 type.go:168] "Request Body" body=""
	I0407 14:25:08.426046    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:08.426046    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:08.426046    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:08.426046    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:08.431522    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:25:08.431522    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:08.431522    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:08.431522    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:08.431522    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:08.431522    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:08 GMT
	I0407 14:25:08.431522    9664 round_trippers.go:587]     Audit-Id: 99eb50f5-5758-4a84-8a7a-f7b793738a79
	I0407 14:25:08.431522    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:08.431847    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:08.926340    9664 type.go:168] "Request Body" body=""
	I0407 14:25:08.926340    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:08.926340    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:08.926340    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:08.926340    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:08.932711    9664 round_trippers.go:581] Response Status: 200 OK in 6 milliseconds
	I0407 14:25:08.932711    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:08.932711    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:08.932711    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:08.932711    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:08.932711    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:08.932711    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:08 GMT
	I0407 14:25:08.932711    9664 round_trippers.go:587]     Audit-Id: 2cbdf883-3bfc-4449-aaaf-3cacf6e20358
	I0407 14:25:08.934098    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:08.934221    9664 node_ready.go:53] node "multinode-140200" has status "Ready":"False"
	I0407 14:25:09.426622    9664 type.go:168] "Request Body" body=""
	I0407 14:25:09.426768    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:09.426768    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:09.426768    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:09.426768    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:09.430938    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:09.430938    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:09.430938    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:09 GMT
	I0407 14:25:09.430938    9664 round_trippers.go:587]     Audit-Id: 7b7e9fd6-8c30-475f-adff-6a9bad9f4687
	I0407 14:25:09.430938    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:09.430938    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:09.430938    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:09.430938    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:09.430938    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:09.926506    9664 type.go:168] "Request Body" body=""
	I0407 14:25:09.926506    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:09.926506    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:09.926506    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:09.926506    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:09.932443    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:25:09.932443    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:09.932443    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:09.932443    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:09 GMT
	I0407 14:25:09.932443    9664 round_trippers.go:587]     Audit-Id: 28e4709f-8ebb-4eff-aa38-9c875c18c07b
	I0407 14:25:09.932443    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:09.932533    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:09.932533    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:09.932723    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:10.426916    9664 type.go:168] "Request Body" body=""
	I0407 14:25:10.426916    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:10.426916    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:10.426916    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:10.426916    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:10.432713    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:25:10.432713    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:10.432713    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:10.432713    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:10.432713    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:10 GMT
	I0407 14:25:10.432713    9664 round_trippers.go:587]     Audit-Id: c732ccb7-8fcb-4f1b-8f6d-a95b98dceb52
	I0407 14:25:10.432713    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:10.432713    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:10.432713    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:10.926243    9664 type.go:168] "Request Body" body=""
	I0407 14:25:10.926243    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:10.926243    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:10.926243    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:10.926243    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:10.931365    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:25:10.931908    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:10.931908    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:10.931908    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:10 GMT
	I0407 14:25:10.931908    9664 round_trippers.go:587]     Audit-Id: 5062a3ca-76cf-4e51-98d0-bd101f2321e4
	I0407 14:25:10.931908    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:10.931908    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:10.931908    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:10.932380    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:11.426262    9664 type.go:168] "Request Body" body=""
	I0407 14:25:11.426262    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:11.426262    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:11.426262    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:11.426262    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:11.431314    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:11.431314    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:11.431314    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:11.431314    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:11 GMT
	I0407 14:25:11.431314    9664 round_trippers.go:587]     Audit-Id: 6bc6b6d7-a3df-40c1-89bf-0729b0c1af1f
	I0407 14:25:11.431314    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:11.431314    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:11.431314    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:11.431314    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:11.431314    9664 node_ready.go:53] node "multinode-140200" has status "Ready":"False"
	I0407 14:25:11.927033    9664 type.go:168] "Request Body" body=""
	I0407 14:25:11.927033    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:11.927033    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:11.927033    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:11.927033    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:11.931454    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:11.931454    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:11.931454    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:11.931454    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:11.931454    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:11 GMT
	I0407 14:25:11.931454    9664 round_trippers.go:587]     Audit-Id: 9518686b-9cab-450b-8b3f-4e05c299019a
	I0407 14:25:11.931454    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:11.931454    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:11.931454    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:12.425622    9664 type.go:168] "Request Body" body=""
	I0407 14:25:12.425622    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:12.425622    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:12.425622    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:12.425622    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:12.429716    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:12.429787    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:12.429787    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:12.429787    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:12.429787    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:12.429787    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:12 GMT
	I0407 14:25:12.429787    9664 round_trippers.go:587]     Audit-Id: cf459972-04e3-4753-85e7-6004731394e5
	I0407 14:25:12.429787    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:12.430237    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:12.926113    9664 type.go:168] "Request Body" body=""
	I0407 14:25:12.926113    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:12.926113    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:12.926113    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:12.926113    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:12.937916    9664 round_trippers.go:581] Response Status: 200 OK in 11 milliseconds
	I0407 14:25:12.937979    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:12.937979    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:12.937979    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:12.937979    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:12.937979    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:12 GMT
	I0407 14:25:12.937979    9664 round_trippers.go:587]     Audit-Id: 5613e380-f60b-4f5d-a960-8f3b0f725a5b
	I0407 14:25:12.937979    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:12.938414    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:13.426643    9664 type.go:168] "Request Body" body=""
	I0407 14:25:13.426643    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:13.426643    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:13.426643    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:13.426643    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:13.430976    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:13.430976    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:13.430976    9664 round_trippers.go:587]     Audit-Id: f442f1a3-8a2d-4c2e-9c54-156d2761773b
	I0407 14:25:13.430976    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:13.430976    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:13.430976    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:13.430976    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:13.430976    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:13 GMT
	I0407 14:25:13.430976    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:13.431687    9664 node_ready.go:53] node "multinode-140200" has status "Ready":"False"
	I0407 14:25:13.926520    9664 type.go:168] "Request Body" body=""
	I0407 14:25:13.926520    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:13.926520    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:13.926520    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:13.926520    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:13.932457    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:25:13.932457    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:13.932457    9664 round_trippers.go:587]     Audit-Id: 1af65f19-03cb-4324-8e8c-470e47f5b172
	I0407 14:25:13.932457    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:13.932457    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:13.932457    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:13.932457    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:13.932457    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:13 GMT
	I0407 14:25:13.932792    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:14.425850    9664 type.go:168] "Request Body" body=""
	I0407 14:25:14.426375    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:14.426375    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:14.426375    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:14.426375    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:14.430968    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:14.430968    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:14.430968    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:14.430968    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:14.430968    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:14 GMT
	I0407 14:25:14.430968    9664 round_trippers.go:587]     Audit-Id: b3d6a534-cd3d-4fed-a8ca-a3d3033f8e7d
	I0407 14:25:14.430968    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:14.430968    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:14.430968    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:14.926259    9664 type.go:168] "Request Body" body=""
	I0407 14:25:14.926793    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:14.926920    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:14.926988    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:14.926988    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:14.930947    9664 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:25:14.931060    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:14.931060    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:14.931060    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:14.931060    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:14.931060    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:14 GMT
	I0407 14:25:14.931060    9664 round_trippers.go:587]     Audit-Id: 05b10e81-50ed-42f9-9298-ef3dac5f01d3
	I0407 14:25:14.931060    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:14.931372    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 d1 26 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..&.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 31 39  39 32 38 00 42 08 08 dd  |d4df2.19928.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 23537 chars]
	 >
	I0407 14:25:15.426365    9664 type.go:168] "Request Body" body=""
	I0407 14:25:15.426579    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:15.426579    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:15.426579    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:15.426698    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:15.430743    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:15.430743    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:15.430908    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:15.430908    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:15.430908    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:15.430908    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:15 GMT
	I0407 14:25:15.430908    9664 round_trippers.go:587]     Audit-Id: 59a7bd15-0fb1-4b78-a2b8-e975f02f6e95
	I0407 14:25:15.430908    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:15.431386    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 97 25 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 32 30  31 37 38 00 42 08 08 dd  |d4df2.20178.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22595 chars]
	 >
	I0407 14:25:15.431607    9664 node_ready.go:49] node "multinode-140200" has status "Ready":"True"
	I0407 14:25:15.431724    9664 node_ready.go:38] duration metric: took 15.0061683s for node "multinode-140200" to be "Ready" ...
	I0407 14:25:15.431724    9664 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 14:25:15.431944    9664 type.go:204] "Request Body" body=""
	I0407 14:25:15.431944    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods
	I0407 14:25:15.432090    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:15.432090    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:15.432090    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:15.435465    9664 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:25:15.435465    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:15.435465    9664 round_trippers.go:587]     Audit-Id: fdec0051-74c1-4c67-b56e-e084b58255a5
	I0407 14:25:15.436320    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:15.436320    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:15.436320    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:15.436320    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:15.436320    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:15 GMT
	I0407 14:25:15.440368    9664 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 e7 e8 03 0a  0a 0a 00 12 04 32 30 31  |ist..........201|
		00000020  37 1a 00 12 c1 28 0a af  19 0a 18 63 6f 72 65 64  |7....(.....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 35 66  |ns-668d6bf9bc-5f|
		00000040  70 34 66 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |p4f..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 34 33 37 32 32 36 61  |ystem".*$437226a|
		00000070  65 2d 65 36 33 64 2d 34  32 34 35 2d 62 62 65 61  |e-e63d-4245-bbea|
		00000080  2d 61 64 35 63 34 31 66  66 39 61 39 33 32 04 32  |-ad5c41ff9a932.2|
		00000090  30 30 39 38 00 42 08 08  e6 b4 cf bf 06 10 00 5a  |0098.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 308089 chars]
	 >
	I0407 14:25:15.440837    9664 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-5fp4f" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:15.440837    9664 type.go:168] "Request Body" body=""
	I0407 14:25:15.441409    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/coredns-668d6bf9bc-5fp4f
	I0407 14:25:15.441450    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:15.441450    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:15.441450    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:15.444720    9664 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:25:15.444720    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:15.444720    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:15.444720    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:15.444720    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:15.444720    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:15 GMT
	I0407 14:25:15.444720    9664 round_trippers.go:587]     Audit-Id: 0f160a44-3f16-442c-b309-ca10dd5bfbca
	I0407 14:25:15.444720    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:15.444720    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  c1 28 0a af 19 0a 18 63  6f 72 65 64 6e 73 2d 36  |.(.....coredns-6|
		00000020  36 38 64 36 62 66 39 62  63 2d 35 66 70 34 66 12  |68d6bf9bc-5fp4f.|
		00000030  13 63 6f 72 65 64 6e 73  2d 36 36 38 64 36 62 66  |.coredns-668d6bf|
		00000040  39 62 63 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |9bc-..kube-syste|
		00000050  6d 22 00 2a 24 34 33 37  32 32 36 61 65 2d 65 36  |m".*$437226ae-e6|
		00000060  33 64 2d 34 32 34 35 2d  62 62 65 61 2d 61 64 35  |3d-4245-bbea-ad5|
		00000070  63 34 31 66 66 39 61 39  33 32 04 32 30 30 39 38  |c41ff9a932.20098|
		00000080  00 42 08 08 e6 b4 cf bf  06 10 00 5a 13 0a 07 6b  |.B.........Z...k|
		00000090  38 73 2d 61 70 70 12 08  6b 75 62 65 2d 64 6e 73  |8s-app..kube-dns|
		000000a0  5a 1f 0a 11 70 6f 64 2d  74 65 6d 70 6c 61 74 65  |Z...pod-template|
		000000b0  2d 68 61 73 68 12 0a 36  36 38 64 36 62 66 39 62  |-hash..668d6bf9b|
		000000c0  63 6a 53 0a 0a 52 65 70  6c 69 63 61 53 65 74 1a  |cjS..ReplicaSet [truncated 24721 chars]
	 >
	I0407 14:25:15.444720    9664 type.go:168] "Request Body" body=""
	I0407 14:25:15.444720    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:15.444720    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:15.444720    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:15.444720    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:15.447824    9664 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:25:15.447824    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:15.447824    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:15.447824    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:15.447824    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:15.447824    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:15.447824    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:15 GMT
	I0407 14:25:15.447824    9664 round_trippers.go:587]     Audit-Id: 8e183c6c-e479-40d4-9bfe-8363c086f46d
	I0407 14:25:15.447824    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 97 25 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 32 30  31 37 38 00 42 08 08 dd  |d4df2.20178.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22595 chars]
	 >
	I0407 14:25:15.447824    9664 pod_ready.go:93] pod "coredns-668d6bf9bc-5fp4f" in "kube-system" namespace has status "Ready":"True"
	I0407 14:25:15.447824    9664 pod_ready.go:82] duration metric: took 6.9868ms for pod "coredns-668d6bf9bc-5fp4f" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:15.447824    9664 pod_ready.go:79] waiting up to 6m0s for pod "etcd-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:15.447824    9664 type.go:168] "Request Body" body=""
	I0407 14:25:15.448855    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/etcd-multinode-140200
	I0407 14:25:15.448947    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:15.448947    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:15.448966    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:15.451206    9664 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:25:15.451206    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:15.451206    9664 round_trippers.go:587]     Audit-Id: ce4ba5ab-b3da-4534-8071-4bc25c2c128e
	I0407 14:25:15.451206    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:15.451206    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:15.451206    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:15.451206    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:15.451206    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:15 GMT
	I0407 14:25:15.452563    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  eb 2b 0a 9b 1a 0a 15 65  74 63 64 2d 6d 75 6c 74  |.+.....etcd-mult|
		00000020  69 6e 6f 64 65 2d 31 34  30 32 30 30 12 00 1a 0b  |inode-140200....|
		00000030  6b 75 62 65 2d 73 79 73  74 65 6d 22 00 2a 24 35  |kube-system".*$5|
		00000040  30 65 38 34 63 35 36 2d  35 64 37 38 2d 34 61 35  |0e84c56-5d78-4a5|
		00000050  31 2d 62 64 36 33 2d 34  61 37 32 34 63 63 64 35  |1-bd63-4a724ccd5|
		00000060  66 64 38 32 04 31 39 39  30 38 00 42 08 08 b7 c0  |fd82.19908.B....|
		00000070  cf bf 06 10 00 5a 11 0a  09 63 6f 6d 70 6f 6e 65  |.....Z...compone|
		00000080  6e 74 12 04 65 74 63 64  5a 15 0a 04 74 69 65 72  |nt..etcdZ...tier|
		00000090  12 0d 63 6f 6e 74 72 6f  6c 2d 70 6c 61 6e 65 62  |..control-planeb|
		000000a0  4d 0a 30 6b 75 62 65 61  64 6d 2e 6b 75 62 65 72  |M.0kubeadm.kuber|
		000000b0  6e 65 74 65 73 2e 69 6f  2f 65 74 63 64 2e 61 64  |netes.io/etcd.ad|
		000000c0  76 65 72 74 69 73 65 2d  63 6c 69 65 6e 74 2d 75  |vertise-client- [truncated 26848 chars]
	 >
	I0407 14:25:15.452889    9664 type.go:168] "Request Body" body=""
	I0407 14:25:15.452946    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:15.452946    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:15.453017    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:15.453058    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:15.455502    9664 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:25:15.455607    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:15.455607    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:15 GMT
	I0407 14:25:15.455607    9664 round_trippers.go:587]     Audit-Id: f5da5501-496f-44c7-a855-35c0827478b3
	I0407 14:25:15.455607    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:15.455683    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:15.455683    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:15.455683    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:15.455683    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 97 25 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 32 30  31 37 38 00 42 08 08 dd  |d4df2.20178.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22595 chars]
	 >
	I0407 14:25:15.455683    9664 pod_ready.go:93] pod "etcd-multinode-140200" in "kube-system" namespace has status "Ready":"True"
	I0407 14:25:15.455683    9664 pod_ready.go:82] duration metric: took 7.8593ms for pod "etcd-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:15.456308    9664 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:15.456308    9664 type.go:168] "Request Body" body=""
	I0407 14:25:15.456308    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-multinode-140200
	I0407 14:25:15.456308    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:15.456308    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:15.456308    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:15.458618    9664 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:25:15.459364    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:15.459364    9664 round_trippers.go:587]     Audit-Id: 867be509-de80-4e9b-8423-997f6b605908
	I0407 14:25:15.459364    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:15.459364    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:15.459498    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:15.459498    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:15.459498    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:15 GMT
	I0407 14:25:15.460242    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  9b 35 0a ab 1c 0a 1f 6b  75 62 65 2d 61 70 69 73  |.5.....kube-apis|
		00000020  65 72 76 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |erver-multinode-|
		00000030  31 34 30 32 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |140200....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 31 34 34 37 35 33 64  |ystem".*$144753d|
		00000050  63 2d 63 36 32 31 2d 34  35 66 37 2d 61 39 34 61  |c-c621-45f7-a94a|
		00000060  2d 38 62 33 38 33 35 65  65 62 62 31 32 32 04 31  |-8b3835eebb122.1|
		00000070  39 38 32 38 00 42 08 08  b7 c0 cf bf 06 10 00 5a  |9828.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 61 70 69 73 65  72 76 65 72 5a 15 0a 04  |be-apiserverZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 54 0a 3f 6b  75 62 65 61 64 6d 2e 6b  |anebT.?kubeadm.k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 6b 75 62  |ubernetes.io/ku [truncated 32773 chars]
	 >
	I0407 14:25:15.460697    9664 type.go:168] "Request Body" body=""
	I0407 14:25:15.460788    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:15.460788    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:15.460839    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:15.460839    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:15.463330    9664 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:25:15.463450    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:15.463450    9664 round_trippers.go:587]     Audit-Id: e636f6a7-8a11-4ce9-b274-22634c129ca6
	I0407 14:25:15.463450    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:15.463506    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:15.463506    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:15.463506    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:15.463506    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:15 GMT
	I0407 14:25:15.464218    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 97 25 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 32 30  31 37 38 00 42 08 08 dd  |d4df2.20178.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22595 chars]
	 >
	I0407 14:25:15.464436    9664 pod_ready.go:93] pod "kube-apiserver-multinode-140200" in "kube-system" namespace has status "Ready":"True"
	I0407 14:25:15.464436    9664 pod_ready.go:82] duration metric: took 8.1279ms for pod "kube-apiserver-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:15.464515    9664 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:15.464650    9664 type.go:168] "Request Body" body=""
	I0407 14:25:15.464755    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-multinode-140200
	I0407 14:25:15.464755    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:15.464814    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:15.464814    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:15.467183    9664 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:25:15.467183    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:15.467183    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:15.467183    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:15.467183    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:15.467183    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:15 GMT
	I0407 14:25:15.467183    9664 round_trippers.go:587]     Audit-Id: 8c4828ca-f0fb-4aa4-8da7-a97647cb95a1
	I0407 14:25:15.467183    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:15.467183    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  d5 31 0a 99 1d 0a 28 6b  75 62 65 2d 63 6f 6e 74  |.1....(kube-cont|
		00000020  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 2d 6d  |roller-manager-m|
		00000030  75 6c 74 69 6e 6f 64 65  2d 31 34 30 32 30 30 12  |ultinode-140200.|
		00000040  00 1a 0b 6b 75 62 65 2d  73 79 73 74 65 6d 22 00  |...kube-system".|
		00000050  2a 24 61 37 63 36 65 33  62 62 2d 31 39 37 63 2d  |*$a7c6e3bb-197c-|
		00000060  34 33 34 65 2d 39 66 31  39 2d 37 34 64 37 65 34  |434e-9f19-74d7e4|
		00000070  38 62 35 30 64 65 32 04  31 39 39 33 38 00 42 08  |8b50de2.19938.B.|
		00000080  08 e0 b4 cf bf 06 10 00  5a 24 0a 09 63 6f 6d 70  |........Z$..comp|
		00000090  6f 6e 65 6e 74 12 17 6b  75 62 65 2d 63 6f 6e 74  |onent..kube-cont|
		000000a0  72 6f 6c 6c 65 72 2d 6d  61 6e 61 67 65 72 5a 15  |roller-managerZ.|
		000000b0  0a 04 74 69 65 72 12 0d  63 6f 6e 74 72 6f 6c 2d  |..tier..control-|
		000000c0  70 6c 61 6e 65 62 3d 0a  19 6b 75 62 65 72 6e 65  |planeb=..kubern [truncated 30492 chars]
	 >
	I0407 14:25:15.467183    9664 type.go:168] "Request Body" body=""
	I0407 14:25:15.467183    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:15.467183    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:15.467183    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:15.467183    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:15.470037    9664 round_trippers.go:581] Response Status: 200 OK in 2 milliseconds
	I0407 14:25:15.470037    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:15.470037    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:15 GMT
	I0407 14:25:15.470037    9664 round_trippers.go:587]     Audit-Id: f09c00b8-345b-4748-b7e4-65aa9b8577e4
	I0407 14:25:15.470037    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:15.470037    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:15.470037    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:15.470037    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:15.471264    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 97 25 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 32 30  31 37 38 00 42 08 08 dd  |d4df2.20178.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22595 chars]
	 >
	I0407 14:25:15.471264    9664 pod_ready.go:93] pod "kube-controller-manager-multinode-140200" in "kube-system" namespace has status "Ready":"True"
	I0407 14:25:15.471452    9664 pod_ready.go:82] duration metric: took 6.9371ms for pod "kube-controller-manager-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:15.471452    9664 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-2r7lj" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:15.471452    9664 type.go:168] "Request Body" body=""
	I0407 14:25:15.626492    9664 request.go:661] Waited for 154.8699ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2r7lj
	I0407 14:25:15.626492    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-2r7lj
	I0407 14:25:15.626492    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:15.626492    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:15.626492    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:15.632044    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:25:15.632108    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:15.632108    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:15 GMT
	I0407 14:25:15.632108    9664 round_trippers.go:587]     Audit-Id: 2feefe9b-59fc-42c1-b2b6-c13f94f57eba
	I0407 14:25:15.632108    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:15.632108    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:15.632108    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:15.632167    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:15.632497    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  a0 25 0a be 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.%.....kube-prox|
		00000020  79 2d 32 72 37 6c 6a 12  0b 6b 75 62 65 2d 70 72  |y-2r7lj..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 34 38 39  32 64 37 30 33 2d 66 63  |m".*$4892d703-fc|
		00000050  34 33 2d 34 66 36 37 2d  38 34 39 33 2d 65 61 65  |43-4f67-8493-eae|
		00000060  61 65 38 63 35 65 37 36  35 32 03 36 33 32 38 00  |ae8c5e7652.6328.|
		00000070  42 08 08 a0 b6 cf bf 06  10 00 5a 26 0a 18 63 6f  |B.........Z&..co|
		00000080  6e 74 72 6f 6c 6c 65 72  2d 72 65 76 69 73 69 6f  |ntroller-revisio|
		00000090  6e 2d 68 61 73 68 12 0a  37 62 62 38 34 63 34 39  |n-hash..7bb84c49|
		000000a0  38 34 5a 15 0a 07 6b 38  73 2d 61 70 70 12 0a 6b  |84Z...k8s-app..k|
		000000b0  75 62 65 2d 70 72 6f 78  79 5a 1c 0a 17 70 6f 64  |ube-proxyZ...pod|
		000000c0  2d 74 65 6d 70 6c 61 74  65 2d 67 65 6e 65 72 61  |-template-gener [truncated 22666 chars]
	 >
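The repeated "Waited for ... due to client-side throttling, not priority and fairness" messages come from client-go's own client-side rate limiter, which defaults to roughly QPS 5 / burst 10 when left unset, so bursts of GETs get delayed by 150-200ms as seen here. A hedged sketch of widening those limits on a rest.Config follows; the values are arbitrary examples, not minikube's settings.

```go
// qps_sketch.go: raise client-go's client-side rate limits, the source of the
// "Waited for ... due to client-side throttling" lines above. Illustrative only.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	// With QPS/Burst left at zero, client-go falls back to its defaults
	// (rest.DefaultQPS / rest.DefaultBurst), which is what throttles the
	// back-to-back pod and node GETs in the log.
	cfg.QPS = 50
	cfg.Burst = 100
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	fmt.Printf("client ready: %T\n", cs)
}
```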
	I0407 14:25:15.632663    9664 type.go:168] "Request Body" body=""
	I0407 14:25:15.827616    9664 request.go:661] Waited for 194.951ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:25:15.827859    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200-m02
	I0407 14:25:15.827859    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:15.827859    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:15.827859    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:15.831936    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:15.831936    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:15.831936    9664 round_trippers.go:587]     Audit-Id: 1bb33d8a-e4b4-44f8-8f48-d5e8e9fdaea4
	I0407 14:25:15.831936    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:15.831936    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:15.831936    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:15.831936    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:15.831936    9664 round_trippers.go:587]     Content-Length: 3463
	I0407 14:25:15.831936    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:15 GMT
	I0407 14:25:15.831936    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 f0 1a 0a b0 0f 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 32 12 00 1a 00  |e-140200-m02....|
		00000030  22 00 2a 24 35 63 65 62  64 65 61 32 2d 38 63 30  |".*$5cebdea2-8c0|
		00000040  37 2d 34 33 37 37 2d 38  33 38 63 2d 39 63 33 37  |7-4377-838c-9c37|
		00000050  66 32 66 33 30 30 31 37  32 04 31 37 38 32 38 00  |f2f300172.17828.|
		00000060  42 08 08 a0 b6 cf bf 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 16110 chars]
	 >
	I0407 14:25:15.832581    9664 pod_ready.go:93] pod "kube-proxy-2r7lj" in "kube-system" namespace has status "Ready":"True"
	I0407 14:25:15.832581    9664 pod_ready.go:82] duration metric: took 361.1264ms for pod "kube-proxy-2r7lj" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:15.832659    9664 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-9rx2d" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:15.832838    9664 type.go:168] "Request Body" body=""
	I0407 14:25:16.027190    9664 request.go:661] Waited for 194.2878ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9rx2d
	I0407 14:25:16.027190    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-9rx2d
	I0407 14:25:16.027190    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:16.027190    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:16.027190    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:16.034868    9664 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 14:25:16.034868    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:16.034868    9664 round_trippers.go:587]     Audit-Id: 4133775b-1bd3-42ca-899e-4945d6d50d6d
	I0407 14:25:16.034868    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:16.034868    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:16.034933    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:16.034933    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:16.034933    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:16 GMT
	I0407 14:25:16.035216    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  87 26 0a bf 14 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 39 72 78 32 64 12  0b 6b 75 62 65 2d 70 72  |y-9rx2d..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 32 65 61  61 62 32 35 64 2d 66 65  |m".*$2eaab25d-fe|
		00000050  30 62 2d 34 63 34 38 2d  61 63 36 62 2d 34 32 30  |0b-4c48-ac6b-420|
		00000060  39 35 66 35 66 62 63 65  36 32 04 31 39 36 35 38  |95f5fbce62.19658|
		00000070  00 42 08 08 e5 b4 cf bf  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 37 62 62 38 34 63 34  |on-hash..7bb84c4|
		000000a0  39 38 34 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |984Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23147 chars]
	 >
	I0407 14:25:16.035505    9664 type.go:168] "Request Body" body=""
	I0407 14:25:16.226670    9664 request.go:661] Waited for 191.164ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:16.226670    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:16.226670    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:16.226670    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:16.226670    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:16.231184    9664 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:25:16.231184    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:16.231293    9664 round_trippers.go:587]     Audit-Id: 1eddef35-1f56-488b-8773-b8e54f7a9a94
	I0407 14:25:16.231293    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:16.231293    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:16.231293    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:16.231293    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:16.231293    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:16 GMT
	I0407 14:25:16.231484    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 97 25 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 32 30  31 37 38 00 42 08 08 dd  |d4df2.20178.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22595 chars]
	 >
	I0407 14:25:16.231484    9664 pod_ready.go:93] pod "kube-proxy-9rx2d" in "kube-system" namespace has status "Ready":"True"
	I0407 14:25:16.231484    9664 pod_ready.go:82] duration metric: took 398.8218ms for pod "kube-proxy-9rx2d" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:16.231484    9664 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-kvg58" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:16.232022    9664 type.go:168] "Request Body" body=""
	I0407 14:25:16.428908    9664 request.go:661] Waited for 196.8841ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kvg58
	I0407 14:25:16.428908    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-proxy-kvg58
	I0407 14:25:16.428908    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:16.429507    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:16.429507    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:16.434571    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:25:16.434571    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:16.434571    9664 round_trippers.go:587]     Audit-Id: 8dae401b-da2a-4e6b-aa61-a77b63eace82
	I0407 14:25:16.434571    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:16.434571    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:16.434571    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:16.434571    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:16.434571    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:16 GMT
	I0407 14:25:16.435200    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  a5 26 0a c2 15 0a 10 6b  75 62 65 2d 70 72 6f 78  |.&.....kube-prox|
		00000020  79 2d 6b 76 67 35 38 12  0b 6b 75 62 65 2d 70 72  |y-kvg58..kube-pr|
		00000030  6f 78 79 2d 1a 0b 6b 75  62 65 2d 73 79 73 74 65  |oxy-..kube-syste|
		00000040  6d 22 00 2a 24 62 61 38  61 33 33 32 63 2d 62 62  |m".*$ba8a332c-bb|
		00000050  34 61 2d 34 65 39 63 2d  39 61 34 65 2d 32 63 35  |4a-4e9c-9a4e-2c5|
		00000060  37 38 62 64 63 39 39 63  31 32 04 31 38 33 36 38  |78bdc99c12.18368|
		00000070  00 42 08 08 c8 b8 cf bf  06 10 00 5a 26 0a 18 63  |.B.........Z&..c|
		00000080  6f 6e 74 72 6f 6c 6c 65  72 2d 72 65 76 69 73 69  |ontroller-revisi|
		00000090  6f 6e 2d 68 61 73 68 12  0a 37 62 62 38 34 63 34  |on-hash..7bb84c4|
		000000a0  39 38 34 5a 15 0a 07 6b  38 73 2d 61 70 70 12 0a  |984Z...k8s-app..|
		000000b0  6b 75 62 65 2d 70 72 6f  78 79 5a 1c 0a 17 70 6f  |kube-proxyZ...po|
		000000c0  64 2d 74 65 6d 70 6c 61  74 65 2d 67 65 6e 65 72  |d-template-gene [truncated 23303 chars]
	 >
	I0407 14:25:16.435576    9664 type.go:168] "Request Body" body=""
	I0407 14:25:16.627305    9664 request.go:661] Waited for 191.7268ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/nodes/multinode-140200-m03
	I0407 14:25:16.627305    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200-m03
	I0407 14:25:16.627305    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:16.627305    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:16.627305    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:16.632126    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:16.632126    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:16.632126    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:16.632126    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:16.632126    9664 round_trippers.go:587]     Content-Length: 3882
	I0407 14:25:16.632304    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:16 GMT
	I0407 14:25:16.632304    9664 round_trippers.go:587]     Audit-Id: 61102d7b-050e-4989-8035-0b26574af3a6
	I0407 14:25:16.632304    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:16.632304    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:16.632470    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 93 1e 0a eb 12 0a 14  6d 75 6c 74 69 6e 6f 64  |........multinod|
		00000020  65 2d 31 34 30 32 30 30  2d 6d 30 33 12 00 1a 00  |e-140200-m03....|
		00000030  22 00 2a 24 64 33 34 31  65 64 66 63 2d 36 33 31  |".*$d341edfc-631|
		00000040  35 2d 34 62 37 62 2d 38  33 30 34 2d 66 39 32 62  |5-4b7b-8304-f92b|
		00000050  63 34 32 31 32 65 39 33  32 04 31 39 35 32 38 00  |c4212e932.19528.|
		00000060  42 08 08 89 be cf bf 06  10 00 5a 20 0a 17 62 65  |B.........Z ..be|
		00000070  74 61 2e 6b 75 62 65 72  6e 65 74 65 73 2e 69 6f  |ta.kubernetes.io|
		00000080  2f 61 72 63 68 12 05 61  6d 64 36 34 5a 1e 0a 15  |/arch..amd64Z...|
		00000090  62 65 74 61 2e 6b 75 62  65 72 6e 65 74 65 73 2e  |beta.kubernetes.|
		000000a0  69 6f 2f 6f 73 12 05 6c  69 6e 75 78 5a 1b 0a 12  |io/os..linuxZ...|
		000000b0  6b 75 62 65 72 6e 65 74  65 73 2e 69 6f 2f 61 72  |kubernetes.io/ar|
		000000c0  63 68 12 05 61 6d 64 36  34 5a 2e 0a 16 6b 75 62  |ch..amd64Z...ku [truncated 18167 chars]
	 >
	I0407 14:25:16.632470    9664 pod_ready.go:98] node "multinode-140200-m03" hosting pod "kube-proxy-kvg58" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200-m03" has status "Ready":"Unknown"
	I0407 14:25:16.632470    9664 pod_ready.go:82] duration metric: took 400.9828ms for pod "kube-proxy-kvg58" in "kube-system" namespace to be "Ready" ...
	E0407 14:25:16.632470    9664 pod_ready.go:67] WaitExtra: waitPodCondition: node "multinode-140200-m03" hosting pod "kube-proxy-kvg58" in "kube-system" namespace is currently not "Ready" (skipping!): node "multinode-140200-m03" has status "Ready":"Unknown"
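kube-proxy-kvg58 is skipped because its node, multinode-140200-m03, reports Ready "Unknown". A minimal sketch (not minikube's waitPodCondition) of reading a node's Ready condition with client-go, using the node name from the log:

```go
// nodeready_sketch.go: report a node's Ready condition, the signal used above
// to decide whether a pod on that node should be skipped. Illustrative only.
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	node, err := cs.CoreV1().Nodes().Get(context.TODO(), "multinode-140200-m03", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	for _, c := range node.Status.Conditions {
		if c.Type == corev1.NodeReady {
			// "Unknown" here is what makes the log skip kube-proxy-kvg58.
			fmt.Printf("node %s Ready=%s (reason: %s)\n", node.Name, c.Status, c.Reason)
		}
	}
}
```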
	I0407 14:25:16.632470    9664 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:16.632470    9664 type.go:168] "Request Body" body=""
	I0407 14:25:16.826703    9664 request.go:661] Waited for 194.2321ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-140200
	I0407 14:25:16.827192    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-multinode-140200
	I0407 14:25:16.827192    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:16.827252    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:16.827252    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:16.831464    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:16.831542    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:16.831542    9664 round_trippers.go:587]     Audit-Id: c0fd04ad-4d96-4e18-a249-2eddce1771e1
	I0407 14:25:16.831542    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:16.831601    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:16.831601    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:16.831601    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:16.831601    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:16 GMT
	I0407 14:25:16.831988    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 09 0a 02  76 31 12 03 50 6f 64 12  |k8s.....v1..Pod.|
		00000010  e0 23 0a 81 18 0a 1f 6b  75 62 65 2d 73 63 68 65  |.#.....kube-sche|
		00000020  64 75 6c 65 72 2d 6d 75  6c 74 69 6e 6f 64 65 2d  |duler-multinode-|
		00000030  31 34 30 32 30 30 12 00  1a 0b 6b 75 62 65 2d 73  |140200....kube-s|
		00000040  79 73 74 65 6d 22 00 2a  24 38 38 64 66 65 65 65  |ystem".*$88dfeee|
		00000050  38 2d 61 33 63 31 2d 34  38 35 62 2d 61 62 66 65  |8-a3c1-485b-abfe|
		00000060  2d 39 65 61 66 30 30 35  37 64 36 63 66 32 04 31  |-9eaf0057d6cf2.1|
		00000070  39 37 35 38 00 42 08 08  e0 b4 cf bf 06 10 00 5a  |9758.B.........Z|
		00000080  1b 0a 09 63 6f 6d 70 6f  6e 65 6e 74 12 0e 6b 75  |...component..ku|
		00000090  62 65 2d 73 63 68 65 64  75 6c 65 72 5a 15 0a 04  |be-schedulerZ...|
		000000a0  74 69 65 72 12 0d 63 6f  6e 74 72 6f 6c 2d 70 6c  |tier..control-pl|
		000000b0  61 6e 65 62 3d 0a 19 6b  75 62 65 72 6e 65 74 65  |aneb=..kubernete|
		000000c0  73 2e 69 6f 2f 63 6f 6e  66 69 67 2e 68 61 73 68  |s.io/config.has [truncated 21718 chars]
	 >
	I0407 14:25:16.832343    9664 type.go:168] "Request Body" body=""
	I0407 14:25:17.027209    9664 request.go:661] Waited for 194.8639ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:17.027209    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes/multinode-140200
	I0407 14:25:17.027209    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:17.027691    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:17.027789    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:17.031445    9664 round_trippers.go:581] Response Status: 200 OK in 3 milliseconds
	I0407 14:25:17.031554    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:17.031610    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:17.031610    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:17 GMT
	I0407 14:25:17.031610    9664 round_trippers.go:587]     Audit-Id: 56520e97-af50-468f-b1c2-8d1718e05385
	I0407 14:25:17.031610    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:17.031610    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:17.031610    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:17.032159    9664 type.go:168] "Response Body" body=<
		00000000  6b 38 73 00 0a 0a 0a 02  76 31 12 04 4e 6f 64 65  |k8s.....v1..Node|
		00000010  12 97 25 0a 86 12 0a 10  6d 75 6c 74 69 6e 6f 64  |..%.....multinod|
		00000020  65 2d 31 34 30 32 30 30  12 00 1a 00 22 00 2a 24  |e-140200....".*$|
		00000030  31 66 35 33 62 34 63 64  2d 61 62 30 31 2d 34 32  |1f53b4cd-ab01-42|
		00000040  63 61 2d 61 36 61 36 2d  61 39 33 65 66 63 39 62  |ca-a6a6-a93efc9b|
		00000050  64 34 64 66 32 04 32 30  31 37 38 00 42 08 08 dd  |d4df2.20178.B...|
		00000060  b4 cf bf 06 10 00 5a 20  0a 17 62 65 74 61 2e 6b  |......Z ..beta.k|
		00000070  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/arc|
		00000080  68 12 05 61 6d 64 36 34  5a 1e 0a 15 62 65 74 61  |h..amd64Z...beta|
		00000090  2e 6b 75 62 65 72 6e 65  74 65 73 2e 69 6f 2f 6f  |.kubernetes.io/o|
		000000a0  73 12 05 6c 69 6e 75 78  5a 1b 0a 12 6b 75 62 65  |s..linuxZ...kube|
		000000b0  72 6e 65 74 65 73 2e 69  6f 2f 61 72 63 68 12 05  |rnetes.io/arch..|
		000000c0  61 6d 64 36 34 5a 2a 0a  16 6b 75 62 65 72 6e 65  |amd64Z*..kubern [truncated 22595 chars]
	 >
	I0407 14:25:17.032367    9664 pod_ready.go:93] pod "kube-scheduler-multinode-140200" in "kube-system" namespace has status "Ready":"True"
	I0407 14:25:17.032434    9664 pod_ready.go:82] duration metric: took 399.8943ms for pod "kube-scheduler-multinode-140200" in "kube-system" namespace to be "Ready" ...
	I0407 14:25:17.032434    9664 pod_ready.go:39] duration metric: took 1.6005899s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 14:25:17.032546    9664 api_server.go:52] waiting for apiserver process to appear ...
	I0407 14:25:17.045912    9664 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:25:17.072888    9664 command_runner.go:130] > 1986
	I0407 14:25:17.072953    9664 api_server.go:72] duration metric: took 16.9589627s to wait for apiserver process to appear ...
	I0407 14:25:17.072953    9664 api_server.go:88] waiting for apiserver healthz status ...
	I0407 14:25:17.072953    9664 api_server.go:253] Checking apiserver healthz at https://172.17.81.10:8443/healthz ...
	I0407 14:25:17.082197    9664 api_server.go:279] https://172.17.81.10:8443/healthz returned 200:
	ok
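The healthz probe above is a plain GET against https://172.17.81.10:8443/healthz that expects the literal body "ok". A hedged sketch using client-go's REST client rather than minikube's api_server.go helper:

```go
// healthz_sketch.go: GET /healthz on the apiserver and expect the body "ok".
// Illustrative only; minikube's own check lives in api_server.go.
package main

import (
	"context"
	"fmt"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	body, err := cs.Discovery().RESTClient().Get().AbsPath("/healthz").DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Printf("healthz: %s\n", body) // prints "ok" on a healthy control plane
}
```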
	I0407 14:25:17.082621    9664 discovery_client.go:658] "Request Body" body=""
	I0407 14:25:17.082694    9664 round_trippers.go:470] GET https://172.17.81.10:8443/version
	I0407 14:25:17.082694    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:17.082694    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:17.082694    9664 round_trippers.go:480]     Accept: application/json, */*
	I0407 14:25:17.084383    9664 round_trippers.go:581] Response Status: 200 OK in 1 milliseconds
	I0407 14:25:17.084432    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:17.084432    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:17.084432    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:17.084432    9664 round_trippers.go:587]     Content-Length: 263
	I0407 14:25:17.084432    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:17 GMT
	I0407 14:25:17.084432    9664 round_trippers.go:587]     Audit-Id: dc307c05-c46b-4d5e-86cf-6c06e60c28c3
	I0407 14:25:17.084486    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:17.084486    9664 round_trippers.go:587]     Content-Type: application/json
	I0407 14:25:17.084486    9664 discovery_client.go:658] "Response Body" body=<
		{
		  "major": "1",
		  "minor": "32",
		  "gitVersion": "v1.32.2",
		  "gitCommit": "67a30c0adcf52bd3f56ff0893ce19966be12991f",
		  "gitTreeState": "clean",
		  "buildDate": "2025-02-12T21:19:47Z",
		  "goVersion": "go1.23.6",
		  "compiler": "gc",
		  "platform": "linux/amd64"
		}
	 >
	I0407 14:25:17.084540    9664 api_server.go:141] control plane version: v1.32.2
	I0407 14:25:17.084600    9664 api_server.go:131] duration metric: took 11.6471ms to wait for apiserver health ...
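The /version response above (v1.32.2, linux/amd64, go1.23.6) is also available through the discovery client; a minimal sketch, not the exact call path in the log:

```go
// version_sketch.go: read the control-plane version that the /version
// endpoint reports above. Illustrative only.
package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	v, err := cs.Discovery().ServerVersion()
	if err != nil {
		panic(err)
	}
	fmt.Printf("control plane version: %s (%s, %s)\n", v.GitVersion, v.Platform, v.GoVersion)
}
```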
	I0407 14:25:17.084600    9664 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 14:25:17.084662    9664 type.go:204] "Request Body" body=""
	I0407 14:25:17.227083    9664 request.go:661] Waited for 142.3652ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods
	I0407 14:25:17.227686    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods
	I0407 14:25:17.227686    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:17.227686    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:17.227686    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:17.233351    9664 round_trippers.go:581] Response Status: 200 OK in 5 milliseconds
	I0407 14:25:17.233351    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:17.233351    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:17.233351    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:17.233351    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:17 GMT
	I0407 14:25:17.233351    9664 round_trippers.go:587]     Audit-Id: 5922b792-2adc-4725-9b72-24bf3560bfc8
	I0407 14:25:17.233351    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:17.233351    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:17.236180    9664 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 e7 e8 03 0a  0a 0a 00 12 04 32 30 31  |ist..........201|
		00000020  38 1a 00 12 c1 28 0a af  19 0a 18 63 6f 72 65 64  |8....(.....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 35 66  |ns-668d6bf9bc-5f|
		00000040  70 34 66 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |p4f..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 34 33 37 32 32 36 61  |ystem".*$437226a|
		00000070  65 2d 65 36 33 64 2d 34  32 34 35 2d 62 62 65 61  |e-e63d-4245-bbea|
		00000080  2d 61 64 35 63 34 31 66  66 39 61 39 33 32 04 32  |-ad5c41ff9a932.2|
		00000090  30 30 39 38 00 42 08 08  e6 b4 cf bf 06 10 00 5a  |0098.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 308089 chars]
	 >
	I0407 14:25:17.237245    9664 system_pods.go:59] 12 kube-system pods found
	I0407 14:25:17.237349    9664 system_pods.go:61] "coredns-668d6bf9bc-5fp4f" [437226ae-e63d-4245-bbea-ad5c41ff9a93] Running
	I0407 14:25:17.237349    9664 system_pods.go:61] "etcd-multinode-140200" [50e84c56-5d78-4a51-bd63-4a724ccd5fd8] Running
	I0407 14:25:17.237349    9664 system_pods.go:61] "kindnet-pv67r" [5f3d17bc-3df2-48f9-9840-641673243750] Running
	I0407 14:25:17.237349    9664 system_pods.go:61] "kindnet-rnp2q" [e28e853b-b703-4a36-90d2-3af1a37e74e0] Running
	I0407 14:25:17.237431    9664 system_pods.go:61] "kindnet-zkw9q" [123858da-6f70-4b10-b38e-bd930d21dbe4] Running
	I0407 14:25:17.237431    9664 system_pods.go:61] "kube-apiserver-multinode-140200" [144753dc-c621-45f7-a94a-8b3835eebb12] Running
	I0407 14:25:17.237431    9664 system_pods.go:61] "kube-controller-manager-multinode-140200" [a7c6e3bb-197c-434e-9f19-74d7e48b50de] Running
	I0407 14:25:17.237431    9664 system_pods.go:61] "kube-proxy-2r7lj" [4892d703-fc43-4f67-8493-eaeae8c5e765] Running
	I0407 14:25:17.237431    9664 system_pods.go:61] "kube-proxy-9rx2d" [2eaab25d-fe0b-4c48-ac6b-42095f5fbce6] Running
	I0407 14:25:17.237431    9664 system_pods.go:61] "kube-proxy-kvg58" [ba8a332c-bb4a-4e9c-9a4e-2c578bdc99c1] Running
	I0407 14:25:17.237431    9664 system_pods.go:61] "kube-scheduler-multinode-140200" [88dfeee8-a3c1-485b-abfe-9eaf0057d6cf] Running
	I0407 14:25:17.237431    9664 system_pods.go:61] "storage-provisioner" [01df03d8-8816-480c-941b-180069d26997] Running
	I0407 14:25:17.237431    9664 system_pods.go:74] duration metric: took 152.8296ms to wait for pod list to return data ...
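The "12 kube-system pods found" summary comes from a single List call against /api/v1/namespaces/kube-system/pods. A hedged client-go sketch of that listing:

```go
// podlist_sketch.go: list kube-system pods and print name and phase, mirroring
// the "12 kube-system pods found" summary above. Illustrative only.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := cs.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("%d kube-system pods found\n", len(pods.Items))
	for _, p := range pods.Items {
		fmt.Printf("  %q [%s] %s\n", p.Name, p.UID, p.Status.Phase)
	}
}
```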
	I0407 14:25:17.237431    9664 default_sa.go:34] waiting for default service account to be created ...
	I0407 14:25:17.237573    9664 type.go:204] "Request Body" body=""
	I0407 14:25:17.426716    9664 request.go:661] Waited for 189.1415ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/namespaces/default/serviceaccounts
	I0407 14:25:17.426993    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/default/serviceaccounts
	I0407 14:25:17.426993    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:17.426993    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:17.427189    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:17.434636    9664 round_trippers.go:581] Response Status: 200 OK in 7 milliseconds
	I0407 14:25:17.434636    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:17.434636    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:17.434636    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:17.434753    9664 round_trippers.go:587]     Content-Length: 129
	I0407 14:25:17.434753    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:17 GMT
	I0407 14:25:17.434753    9664 round_trippers.go:587]     Audit-Id: 6630aae9-ebe9-423b-9ce6-95e466f9ac4d
	I0407 14:25:17.434753    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:17.434753    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:17.434830    9664 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 18 0a 02  76 31 12 12 53 65 72 76  |k8s.....v1..Serv|
		00000010  69 63 65 41 63 63 6f 75  6e 74 4c 69 73 74 12 5d  |iceAccountList.]|
		00000020  0a 0a 0a 00 12 04 32 30  31 38 1a 00 12 4f 0a 4d  |......2018...O.M|
		00000030  0a 07 64 65 66 61 75 6c  74 12 00 1a 07 64 65 66  |..default....def|
		00000040  61 75 6c 74 22 00 2a 24  66 66 31 39 65 66 62 31  |ault".*$ff19efb1|
		00000050  2d 63 35 63 63 2d 34 63  39 30 2d 62 63 36 61 2d  |-c5cc-4c90-bc6a-|
		00000060  31 36 33 38 65 32 62 61  39 39 37 38 32 03 33 33  |1638e2ba99782.33|
		00000070  34 38 00 42 08 08 e5 b4  cf bf 06 10 00 1a 00 22  |48.B..........."|
		00000080  00                                                |.|
	 >
	I0407 14:25:17.434860    9664 default_sa.go:45] found service account: "default"
	I0407 14:25:17.434860    9664 default_sa.go:55] duration metric: took 197.4278ms for default service account to be created ...
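The default_sa.go wait above only needs to see that the "default" ServiceAccount exists in the default namespace. A minimal sketch of that lookup, assuming a direct Get rather than the List shown in the log:

```go
// defaultsa_sketch.go: confirm the "default" ServiceAccount exists, as the
// default_sa.go wait above does. Illustrative only.
package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	sa, err := cs.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("found service account: %q (uid %s)\n", sa.Name, sa.UID)
}
```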
	I0407 14:25:17.434860    9664 system_pods.go:116] waiting for k8s-apps to be running ...
	I0407 14:25:17.434860    9664 type.go:204] "Request Body" body=""
	I0407 14:25:17.626982    9664 request.go:661] Waited for 192.1202ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods
	I0407 14:25:17.627467    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/namespaces/kube-system/pods
	I0407 14:25:17.627467    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:17.627467    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:17.627467    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:17.631976    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:17.631976    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:17.631976    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:17.631976    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:17.631976    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:17.631976    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:17.631976    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:17 GMT
	I0407 14:25:17.631976    9664 round_trippers.go:587]     Audit-Id: b54fd5ea-e2b5-4b1c-a1a4-8582f7d41d77
	I0407 14:25:17.634714    9664 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0d 0a 02  76 31 12 07 50 6f 64 4c  |k8s.....v1..PodL|
		00000010  69 73 74 12 e7 e8 03 0a  0a 0a 00 12 04 32 30 31  |ist..........201|
		00000020  38 1a 00 12 c1 28 0a af  19 0a 18 63 6f 72 65 64  |8....(.....cored|
		00000030  6e 73 2d 36 36 38 64 36  62 66 39 62 63 2d 35 66  |ns-668d6bf9bc-5f|
		00000040  70 34 66 12 13 63 6f 72  65 64 6e 73 2d 36 36 38  |p4f..coredns-668|
		00000050  64 36 62 66 39 62 63 2d  1a 0b 6b 75 62 65 2d 73  |d6bf9bc-..kube-s|
		00000060  79 73 74 65 6d 22 00 2a  24 34 33 37 32 32 36 61  |ystem".*$437226a|
		00000070  65 2d 65 36 33 64 2d 34  32 34 35 2d 62 62 65 61  |e-e63d-4245-bbea|
		00000080  2d 61 64 35 63 34 31 66  66 39 61 39 33 32 04 32  |-ad5c41ff9a932.2|
		00000090  30 30 39 38 00 42 08 08  e6 b4 cf bf 06 10 00 5a  |0098.B.........Z|
		000000a0  13 0a 07 6b 38 73 2d 61  70 70 12 08 6b 75 62 65  |...k8s-app..kube|
		000000b0  2d 64 6e 73 5a 1f 0a 11  70 6f 64 2d 74 65 6d 70  |-dnsZ...pod-temp|
		000000c0  6c 61 74 65 2d 68 61 73  68 12 0a 36 36 38 64 36  |late-hash..668d [truncated 308089 chars]
	 >
	I0407 14:25:17.635499    9664 system_pods.go:86] 12 kube-system pods found
	I0407 14:25:17.635568    9664 system_pods.go:89] "coredns-668d6bf9bc-5fp4f" [437226ae-e63d-4245-bbea-ad5c41ff9a93] Running
	I0407 14:25:17.635568    9664 system_pods.go:89] "etcd-multinode-140200" [50e84c56-5d78-4a51-bd63-4a724ccd5fd8] Running
	I0407 14:25:17.635568    9664 system_pods.go:89] "kindnet-pv67r" [5f3d17bc-3df2-48f9-9840-641673243750] Running
	I0407 14:25:17.635568    9664 system_pods.go:89] "kindnet-rnp2q" [e28e853b-b703-4a36-90d2-3af1a37e74e0] Running
	I0407 14:25:17.635568    9664 system_pods.go:89] "kindnet-zkw9q" [123858da-6f70-4b10-b38e-bd930d21dbe4] Running
	I0407 14:25:17.635568    9664 system_pods.go:89] "kube-apiserver-multinode-140200" [144753dc-c621-45f7-a94a-8b3835eebb12] Running
	I0407 14:25:17.635568    9664 system_pods.go:89] "kube-controller-manager-multinode-140200" [a7c6e3bb-197c-434e-9f19-74d7e48b50de] Running
	I0407 14:25:17.635568    9664 system_pods.go:89] "kube-proxy-2r7lj" [4892d703-fc43-4f67-8493-eaeae8c5e765] Running
	I0407 14:25:17.635568    9664 system_pods.go:89] "kube-proxy-9rx2d" [2eaab25d-fe0b-4c48-ac6b-42095f5fbce6] Running
	I0407 14:25:17.635568    9664 system_pods.go:89] "kube-proxy-kvg58" [ba8a332c-bb4a-4e9c-9a4e-2c578bdc99c1] Running
	I0407 14:25:17.635568    9664 system_pods.go:89] "kube-scheduler-multinode-140200" [88dfeee8-a3c1-485b-abfe-9eaf0057d6cf] Running
	I0407 14:25:17.635735    9664 system_pods.go:89] "storage-provisioner" [01df03d8-8816-480c-941b-180069d26997] Running
	I0407 14:25:17.635735    9664 system_pods.go:126] duration metric: took 200.8738ms to wait for k8s-apps to be running ...
	I0407 14:25:17.635735    9664 system_svc.go:44] waiting for kubelet service to be running ....
	I0407 14:25:17.646094    9664 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 14:25:17.671388    9664 system_svc.go:56] duration metric: took 35.6528ms WaitForService to wait for kubelet
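The kubelet check above relies on the exit status of systemctl: "is-active --quiet" prints nothing and exits 0 only when the unit is active. A small sketch of that exit-code check, run locally here for illustration; in the log the same command runs inside the VM over SSH.

```go
// kubelet_active_sketch.go: interpret "systemctl is-active --quiet" by its
// exit status, which is how the kubelet wait above works. Illustrative only.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// --quiet suppresses output; the exit status carries the answer.
	err := exec.Command("sudo", "systemctl", "is-active", "--quiet", "kubelet").Run()
	if err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
```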
	I0407 14:25:17.671388    9664 kubeadm.go:582] duration metric: took 17.5573938s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 14:25:17.671555    9664 node_conditions.go:102] verifying NodePressure condition ...
	I0407 14:25:17.671673    9664 type.go:204] "Request Body" body=""
	I0407 14:25:17.826762    9664 request.go:661] Waited for 155.0305ms due to client-side throttling, not priority and fairness, request: GET:https://172.17.81.10:8443/api/v1/nodes
	I0407 14:25:17.826762    9664 round_trippers.go:470] GET https://172.17.81.10:8443/api/v1/nodes
	I0407 14:25:17.826762    9664 round_trippers.go:476] Request Headers:
	I0407 14:25:17.826762    9664 round_trippers.go:480]     Accept: application/vnd.kubernetes.protobuf,application/json
	I0407 14:25:17.826762    9664 round_trippers.go:480]     User-Agent: minikube-windows-amd64.exe/v0.0.0 (windows/amd64) kubernetes/$Format
	I0407 14:25:17.831193    9664 round_trippers.go:581] Response Status: 200 OK in 4 milliseconds
	I0407 14:25:17.831193    9664 round_trippers.go:584] Response Headers:
	I0407 14:25:17.831273    9664 round_trippers.go:587]     X-Kubernetes-Pf-Prioritylevel-Uid: b2561c7d-41ae-40fb-9553-e39acd4eeee0
	I0407 14:25:17.831273    9664 round_trippers.go:587]     Date: Mon, 07 Apr 2025 14:25:17 GMT
	I0407 14:25:17.831273    9664 round_trippers.go:587]     Audit-Id: 62a35013-42ff-4a6d-a636-cd50e63978db
	I0407 14:25:17.831273    9664 round_trippers.go:587]     Cache-Control: no-cache, private
	I0407 14:25:17.831273    9664 round_trippers.go:587]     Content-Type: application/vnd.kubernetes.protobuf
	I0407 14:25:17.831273    9664 round_trippers.go:587]     X-Kubernetes-Pf-Flowschema-Uid: 8374820d-65c4-41e1-a4a9-bde139648d45
	I0407 14:25:17.831920    9664 type.go:204] "Response Body" body=<
		00000000  6b 38 73 00 0a 0e 0a 02  76 31 12 08 4e 6f 64 65  |k8s.....v1..Node|
		00000010  4c 69 73 74 12 ea 5d 0a  0a 0a 00 12 04 32 30 32  |List..]......202|
		00000020  30 1a 00 12 d2 24 0a f8  11 0a 10 6d 75 6c 74 69  |0....$.....multi|
		00000030  6e 6f 64 65 2d 31 34 30  32 30 30 12 00 1a 00 22  |node-140200...."|
		00000040  00 2a 24 31 66 35 33 62  34 63 64 2d 61 62 30 31  |.*$1f53b4cd-ab01|
		00000050  2d 34 32 63 61 2d 61 36  61 36 2d 61 39 33 65 66  |-42ca-a6a6-a93ef|
		00000060  63 39 62 64 34 64 66 32  04 32 30 31 39 38 00 42  |c9bd4df2.20198.B|
		00000070  08 08 dd b4 cf bf 06 10  00 5a 20 0a 17 62 65 74  |.........Z ..bet|
		00000080  61 2e 6b 75 62 65 72 6e  65 74 65 73 2e 69 6f 2f  |a.kubernetes.io/|
		00000090  61 72 63 68 12 05 61 6d  64 36 34 5a 1e 0a 15 62  |arch..amd64Z...b|
		000000a0  65 74 61 2e 6b 75 62 65  72 6e 65 74 65 73 2e 69  |eta.kubernetes.i|
		000000b0  6f 2f 6f 73 12 05 6c 69  6e 75 78 5a 1b 0a 12 6b  |o/os..linuxZ...k|
		000000c0  75 62 65 72 6e 65 74 65  73 2e 69 6f 2f 61 72 63  |ubernetes.io/ar [truncated 58452 chars]
	 >
	I0407 14:25:17.832282    9664 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 14:25:17.832381    9664 node_conditions.go:123] node cpu capacity is 2
	I0407 14:25:17.832430    9664 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 14:25:17.832430    9664 node_conditions.go:123] node cpu capacity is 2
	I0407 14:25:17.832430    9664 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 14:25:17.832430    9664 node_conditions.go:123] node cpu capacity is 2
	I0407 14:25:17.832430    9664 node_conditions.go:105] duration metric: took 160.8737ms to run NodePressure ...
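The NodePressure pass above reads each node's reported capacity (2 CPUs and 17734596Ki of ephemeral storage per node). A hedged client-go sketch of the same read, not minikube's node_conditions.go:

```go
// nodecapacity_sketch.go: print each node's CPU and ephemeral-storage
// capacity, matching the node_conditions.go output above. Illustrative only.
package main

import (
	"context"
	"fmt"
	"os"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n", n.Name, cpu.String(), storage.String())
	}
}
```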
	I0407 14:25:17.832430    9664 start.go:241] waiting for startup goroutines ...
	I0407 14:25:17.832430    9664 start.go:246] waiting for cluster config update ...
	I0407 14:25:17.832430    9664 start.go:255] writing updated cluster config ...
	I0407 14:25:17.839229    9664 out.go:201] 
	I0407 14:25:17.842142    9664 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 14:25:17.851140    9664 config.go:182] Loaded profile config "multinode-140200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 14:25:17.851770    9664 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\config.json ...
	I0407 14:25:17.856976    9664 out.go:177] * Starting "multinode-140200-m02" worker node in "multinode-140200" cluster
	I0407 14:25:17.862270    9664 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 14:25:17.862326    9664 cache.go:56] Caching tarball of preloaded images
	I0407 14:25:17.862326    9664 preload.go:172] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0407 14:25:17.862856    9664 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0407 14:25:17.863010    9664 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\config.json ...
	I0407 14:25:17.865055    9664 start.go:360] acquireMachinesLock for multinode-140200-m02: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 14:25:17.865055    9664 start.go:364] duration metric: took 0s to acquireMachinesLock for "multinode-140200-m02"
	I0407 14:25:17.865055    9664 start.go:96] Skipping create...Using existing machine configuration
	I0407 14:25:17.865055    9664 fix.go:54] fixHost starting: m02
	I0407 14:25:17.865645    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:25:20.028277    9664 main.go:141] libmachine: [stdout =====>] : Off
	
	I0407 14:25:20.028662    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:20.028662    9664 fix.go:112] recreateIfNeeded on multinode-140200-m02: state=Stopped err=<nil>
	W0407 14:25:20.028662    9664 fix.go:138] unexpected machine state, will restart: <nil>
	I0407 14:25:20.034137    9664 out.go:177] * Restarting existing hyperv VM for "multinode-140200-m02" ...
	I0407 14:25:20.037084    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM multinode-140200-m02
	I0407 14:25:23.246002    9664 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:25:23.246002    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:23.246220    9664 main.go:141] libmachine: Waiting for host to start...
	I0407 14:25:23.246292    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:25:25.584834    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:25:25.585187    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:25.585187    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:25:28.291683    9664 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:25:28.291683    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:29.291954    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:25:31.624070    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:25:31.624667    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:31.624667    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:25:34.308393    9664 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:25:34.308393    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:35.308680    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:25:37.621521    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:25:37.621595    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:37.621712    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:25:40.232478    9664 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:25:40.232478    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:41.232716    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:25:43.542524    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:25:43.542524    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:43.542524    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:25:46.269896    9664 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:25:46.269896    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:47.270760    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:25:49.632029    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:25:49.632029    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:49.632029    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:25:52.304497    9664 main.go:141] libmachine: [stdout =====>] : 172.17.88.68
	
	I0407 14:25:52.304497    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:52.307323    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:25:54.500423    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:25:54.500797    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:54.500896    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:25:57.114452    9664 main.go:141] libmachine: [stdout =====>] : 172.17.88.68
	
	I0407 14:25:57.114452    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:57.115120    9664 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\multinode-140200\config.json ...
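The restart sequence above follows a simple pattern: after Hyper-V\Start-VM, libmachine alternates two PowerShell probes, ( Hyper-V\Get-VM ... ).state and (( ... ).networkadapters[0]).ipaddresses[0], pausing roughly a second between rounds until the guest reports Running and hands back an IPv4 address (172.17.88.68 here). The following is a minimal standalone sketch of that loop in Go; the VM name is taken from the log, the three-minute deadline is an arbitrary assumption, and this is illustrative only, not minikube's actual code.

// Illustrative sketch, not minikube's implementation: poll a Hyper-V VM via
// PowerShell until it is Running and exposes an IPv4 address, mirroring the
// [executing ==>] lines in the log above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// psOut runs one PowerShell expression non-interactively and returns trimmed stdout.
func psOut(expr string) (string, error) {
	out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", expr).Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const vm = "multinode-140200-m02"          // VM name as seen in the log
	deadline := time.Now().Add(3 * time.Minute) // assumed timeout for this sketch

	for time.Now().Before(deadline) {
		state, err := psOut(fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
		if err == nil && state == "Running" {
			ip, _ := psOut(fmt.Sprintf("(( Hyper-V\\Get-VM %s ).networkadapters[0]).ipaddresses[0]", vm))
			if ip != "" && strings.Contains(ip, ".") { // wait for an IPv4 lease, as the log does
				fmt.Println("VM is up at", ip)
				return
			}
		}
		time.Sleep(time.Second) // the logged loop backs off about a second between probes
	}
	fmt.Println("timed out waiting for", vm)
}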
	I0407 14:25:57.117770    9664 machine.go:93] provisionDockerMachine start ...
	I0407 14:25:57.117862    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:25:59.317057    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:25:59.317904    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:25:59.318043    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:26:01.927243    9664 main.go:141] libmachine: [stdout =====>] : 172.17.88.68
	
	I0407 14:26:01.927243    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:01.932898    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:26:01.933112    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.88.68 22 <nil> <nil>}
	I0407 14:26:01.933112    9664 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 14:26:02.068421    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0407 14:26:02.068562    9664 buildroot.go:166] provisioning hostname "multinode-140200-m02"
	I0407 14:26:02.068621    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:26:04.282526    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:26:04.283554    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:04.283554    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:26:06.896641    9664 main.go:141] libmachine: [stdout =====>] : 172.17.88.68
	
	I0407 14:26:06.896641    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:06.903250    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:26:06.904263    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.88.68 22 <nil> <nil>}
	I0407 14:26:06.904263    9664 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-140200-m02 && echo "multinode-140200-m02" | sudo tee /etc/hostname
	I0407 14:26:07.069029    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-140200-m02
	
	I0407 14:26:07.069135    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:26:09.269214    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:26:09.269495    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:09.269627    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:26:11.913528    9664 main.go:141] libmachine: [stdout =====>] : 172.17.88.68
	
	I0407 14:26:11.914223    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:11.920702    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:26:11.921204    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.88.68 22 <nil> <nil>}
	I0407 14:26:11.921278    9664 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-140200-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-140200-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-140200-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 14:26:12.080192    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: 
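The hostname step above is deliberately idempotent: the remote shell only touches /etc/hosts when no entry for multinode-140200-m02 exists, either rewriting an existing 127.0.1.1 line in place or appending one. Below is a self-contained sketch of running that same command over SSH with golang.org/x/crypto/ssh; the IP, user and key path are copied from the "new ssh client" line logged later in this run, but they are assumptions for this standalone example, and this is not the code path minikube itself uses.

// Illustrative sketch: run the idempotent /etc/hosts edit over SSH.
package main

import (
	"fmt"
	"log"
	"os"

	"golang.org/x/crypto/ssh"
)

func main() {
	// Key path, user and address are assumptions taken from the log.
	key, err := os.ReadFile(`C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200-m02\id_rsa`)
	if err != nil {
		log.Fatal(err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		log.Fatal(err)
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
	}
	client, err := ssh.Dial("tcp", "172.17.88.68:22", cfg)
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// Same idempotent edit as in the log: only modify /etc/hosts if the
	// hostname is not already present.
	cmd := `if ! grep -xq '.*\smultinode-140200-m02' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-140200-m02/g' /etc/hosts
  else
    echo '127.0.1.1 multinode-140200-m02' | sudo tee -a /etc/hosts
  fi
fi`
	sess, err := client.NewSession()
	if err != nil {
		log.Fatal(err)
	}
	defer sess.Close()
	out, err := sess.CombinedOutput(cmd)
	fmt.Printf("output: %q, err: %v\n", out, err)
}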
	I0407 14:26:12.080192    9664 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0407 14:26:12.080192    9664 buildroot.go:174] setting up certificates
	I0407 14:26:12.080192    9664 provision.go:84] configureAuth start
	I0407 14:26:12.080192    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:26:14.284353    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:26:14.285375    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:14.285476    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:26:16.969626    9664 main.go:141] libmachine: [stdout =====>] : 172.17.88.68
	
	I0407 14:26:16.969626    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:16.970490    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:26:19.153196    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:26:19.153382    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:19.153505    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:26:21.757405    9664 main.go:141] libmachine: [stdout =====>] : 172.17.88.68
	
	I0407 14:26:21.757549    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:21.757549    9664 provision.go:143] copyHostCerts
	I0407 14:26:21.757810    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem
	I0407 14:26:21.758141    9664 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0407 14:26:21.758141    9664 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0407 14:26:21.758310    9664 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0407 14:26:21.760019    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem
	I0407 14:26:21.760328    9664 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0407 14:26:21.760328    9664 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0407 14:26:21.760328    9664 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0407 14:26:21.761670    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem
	I0407 14:26:21.761770    9664 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0407 14:26:21.761770    9664 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0407 14:26:21.762318    9664 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0407 14:26:21.763423    9664 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.multinode-140200-m02 san=[127.0.0.1 172.17.88.68 localhost minikube multinode-140200-m02]
	I0407 14:26:21.947726    9664 provision.go:177] copyRemoteCerts
	I0407 14:26:21.958973    9664 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 14:26:21.958973    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:26:24.170550    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:26:24.170550    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:24.170640    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:26:26.787352    9664 main.go:141] libmachine: [stdout =====>] : 172.17.88.68
	
	I0407 14:26:26.787352    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:26.787997    9664 sshutil.go:53] new ssh client: &{IP:172.17.88.68 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200-m02\id_rsa Username:docker}
	I0407 14:26:26.903318    9664 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (4.9442152s)
	I0407 14:26:26.903368    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem -> /etc/docker/ca.pem
	I0407 14:26:26.903961    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 14:26:26.952672    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem -> /etc/docker/server.pem
	I0407 14:26:26.952953    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1229 bytes)
	I0407 14:26:26.997094    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem -> /etc/docker/server-key.pem
	I0407 14:26:26.997523    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0407 14:26:27.048237    9664 provision.go:87] duration metric: took 14.9679314s to configureAuth
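configureAuth generates a fresh server certificate whose SANs cover the loopback address, the VM's current IP and the machine names (see the san=[...] line above), then copies ca.pem, server.pem and server-key.pem into /etc/docker so dockerd can enforce TLS. A rough sketch of issuing such a certificate with Go's crypto/x509 follows; the org and SAN values are copied from the log, while the inline throwaway CA and everything else are assumptions made so the example is self-contained.

// Illustrative sketch: issue a server certificate with the SAN set from the log,
// signed by a CA generated on the spot for the example.
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// Throwaway CA (assumption); the real run loads ca.pem / ca-key.pem from disk.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{Organization: []string{"example-ca"}},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	srvTmpl := &x509.Certificate{
		SerialNumber: big.NewInt(2),
		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-140200-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(365 * 24 * time.Hour),
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		// SANs matching the logged san=[...] list: IPs plus host names.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.88.68")},
		DNSNames:    []string{"localhost", "minikube", "multinode-140200-m02"},
	}
	der, _ := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey)
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}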
	I0407 14:26:27.048237    9664 buildroot.go:189] setting minikube options for container-runtime
	I0407 14:26:27.048903    9664 config.go:182] Loaded profile config "multinode-140200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 14:26:27.048903    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:26:29.234349    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:26:29.235116    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:29.235188    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:26:31.857301    9664 main.go:141] libmachine: [stdout =====>] : 172.17.88.68
	
	I0407 14:26:31.857380    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:31.863793    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:26:31.864368    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.88.68 22 <nil> <nil>}
	I0407 14:26:31.864368    9664 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0407 14:26:32.005233    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0407 14:26:32.005233    9664 buildroot.go:70] root file system type: tmpfs
	I0407 14:26:32.005421    9664 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0407 14:26:32.005421    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:26:34.270407    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:26:34.270407    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:34.271415    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:26:36.873161    9664 main.go:141] libmachine: [stdout =====>] : 172.17.88.68
	
	I0407 14:26:36.874366    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:36.879873    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:26:36.880594    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.88.68 22 <nil> <nil>}
	I0407 14:26:36.880594    9664 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=172.17.81.10"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0407 14:26:37.054659    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=172.17.81.10
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0407 14:26:37.054729    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:26:39.241067    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:26:39.241067    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:39.241424    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:26:41.919824    9664 main.go:141] libmachine: [stdout =====>] : 172.17.88.68
	
	I0407 14:26:41.919824    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:41.925150    9664 main.go:141] libmachine: Using SSH client type: native
	I0407 14:26:41.925798    9664 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.88.68 22 <nil> <nil>}
	I0407 14:26:41.925798    9664 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0407 14:26:44.319046    9664 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0407 14:26:44.319114    9664 machine.go:96] duration metric: took 47.200985s to provisionDockerMachine
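The unit written above is installed conservatively: the content goes to docker.service.new, diff decides whether anything changed, and only then is the file moved into place followed by daemon-reload, enable and restart (here diff fails because no docker.service existed yet, so the move and the symlink creation happen unconditionally). The empty ExecStart= line clears any ExecStart inherited from a base unit, since systemd rejects multiple ExecStart values outside Type=oneshot. Below is a small sketch of rendering such a unit from a template; the field names and the trimmed unit body are assumptions for illustration, not minikube's real template.

// Illustrative sketch: render a docker.service unit from a template, in the
// spirit of the provision step above. The full unit minikube writes is the one
// echoed verbatim in the log.
package main

import (
	"os"
	"text/template"
)

const unit = `[Unit]
Description=Docker Application Container Engine
After=network.target minikube-automount.service docker.socket
Requires=minikube-automount.service docker.socket

[Service]
Type=notify
Restart=on-failure
{{if .NoProxy}}Environment="NO_PROXY={{.NoProxy}}"
{{end}}# Clear any ExecStart inherited from a base unit, then set ours.
ExecStart=
ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider={{.Provider}} --insecure-registry {{.InsecureRegistry}}
ExecReload=/bin/kill -s HUP $MAINPID

[Install]
WantedBy=multi-user.target
`

func main() {
	t := template.Must(template.New("docker.service").Parse(unit))
	_ = t.Execute(os.Stdout, struct {
		NoProxy, Provider, InsecureRegistry string
	}{"172.17.81.10", "hyperv", "10.96.0.0/12"}) // values taken from the log
}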
	I0407 14:26:44.319114    9664 start.go:293] postStartSetup for "multinode-140200-m02" (driver="hyperv")
	I0407 14:26:44.319172    9664 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 14:26:44.330387    9664 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 14:26:44.330387    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:26:46.531384    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:26:46.531384    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:46.531585    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:26:49.174136    9664 main.go:141] libmachine: [stdout =====>] : 172.17.88.68
	
	I0407 14:26:49.174800    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:49.174977    9664 sshutil.go:53] new ssh client: &{IP:172.17.88.68 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200-m02\id_rsa Username:docker}
	I0407 14:26:49.293081    9664 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (4.9626567s)
	I0407 14:26:49.305350    9664 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 14:26:49.312168    9664 command_runner.go:130] > NAME=Buildroot
	I0407 14:26:49.312409    9664 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0407 14:26:49.312409    9664 command_runner.go:130] > ID=buildroot
	I0407 14:26:49.312409    9664 command_runner.go:130] > VERSION_ID=2023.02.9
	I0407 14:26:49.312409    9664 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0407 14:26:49.312502    9664 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 14:26:49.312521    9664 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0407 14:26:49.312909    9664 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0407 14:26:49.313859    9664 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> 77282.pem in /etc/ssl/certs
	I0407 14:26:49.313859    9664 vm_assets.go:164] NewFileAsset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> /etc/ssl/certs/77282.pem
	I0407 14:26:49.324853    9664 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 14:26:49.343448    9664 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem --> /etc/ssl/certs/77282.pem (1708 bytes)
	I0407 14:26:49.395188    9664 start.go:296] duration metric: took 5.0760354s for postStartSetup
	I0407 14:26:49.395188    9664 fix.go:56] duration metric: took 1m31.5294377s for fixHost
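postStartSetup mirrors host-side assets kept under .minikube\files into the guest, preserving the relative path (files\etc\ssl\certs\77282.pem becomes /etc/ssl/certs/77282.pem). The sketch below shows only that mapping step, assuming the same host directory layout; it prints the source-to-destination pairs rather than copying anything, and it is not minikube's filesync code.

// Illustrative sketch: map host assets under .minikube\files to in-VM paths.
package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

func main() {
	// Host directory assumed from the log's filesync scan.
	root := `C:\Users\jenkins.minikube3\minikube-integration\.minikube\files`
	_ = filepath.WalkDir(root, func(p string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel, _ := filepath.Rel(root, p)
		// Convert the Windows-relative path into the VM's POSIX destination.
		dst := "/" + strings.ReplaceAll(rel, `\`, "/")
		fmt.Printf("%s -> %s\n", p, dst)
		return nil
	})
}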
	I0407 14:26:49.395188    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:26:51.683947    9664 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:26:51.684128    9664 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:26:51.684128    9664 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	
	
	==> Docker <==
	Apr 07 14:25:11 multinode-140200 dockerd[1102]: time="2025-04-07T14:25:11.668841967Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 14:25:11 multinode-140200 dockerd[1102]: time="2025-04-07T14:25:11.668967664Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 14:25:11 multinode-140200 dockerd[1102]: time="2025-04-07T14:25:11.669175660Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 14:25:11 multinode-140200 dockerd[1102]: time="2025-04-07T14:25:11.713385610Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 14:25:11 multinode-140200 dockerd[1102]: time="2025-04-07T14:25:11.713478307Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 14:25:11 multinode-140200 dockerd[1102]: time="2025-04-07T14:25:11.713500307Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 14:25:11 multinode-140200 dockerd[1102]: time="2025-04-07T14:25:11.713647203Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 14:25:11 multinode-140200 cri-dockerd[1374]: time="2025-04-07T14:25:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/0c21e794e9c04eddeeeb09893ce2537bdda3ec1b344068dee8726b556dd5420f/resolv.conf as [nameserver 172.17.80.1]"
	Apr 07 14:25:11 multinode-140200 cri-dockerd[1374]: time="2025-04-07T14:25:11Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b65e33162b2bfac3363effe09aa37f6852e0fb1ac9d057367d982684f0c76c73/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 07 14:25:12 multinode-140200 dockerd[1102]: time="2025-04-07T14:25:12.233853982Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 14:25:12 multinode-140200 dockerd[1102]: time="2025-04-07T14:25:12.233997485Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 14:25:12 multinode-140200 dockerd[1102]: time="2025-04-07T14:25:12.234317391Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 14:25:12 multinode-140200 dockerd[1102]: time="2025-04-07T14:25:12.234489995Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 14:25:12 multinode-140200 dockerd[1102]: time="2025-04-07T14:25:12.287310249Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 14:25:12 multinode-140200 dockerd[1102]: time="2025-04-07T14:25:12.288217767Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 14:25:12 multinode-140200 dockerd[1102]: time="2025-04-07T14:25:12.288294869Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 14:25:12 multinode-140200 dockerd[1102]: time="2025-04-07T14:25:12.288929482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 14:25:27 multinode-140200 dockerd[1094]: time="2025-04-07T14:25:27.303937753Z" level=info msg="ignoring event" container=669cf4e7d29f662a3b9693cd321bc677d9c57862a930ee830293759d0f4ebc58 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Apr 07 14:25:27 multinode-140200 dockerd[1102]: time="2025-04-07T14:25:27.304791559Z" level=info msg="shim disconnected" id=669cf4e7d29f662a3b9693cd321bc677d9c57862a930ee830293759d0f4ebc58 namespace=moby
	Apr 07 14:25:27 multinode-140200 dockerd[1102]: time="2025-04-07T14:25:27.304881460Z" level=warning msg="cleaning up after shim disconnected" id=669cf4e7d29f662a3b9693cd321bc677d9c57862a930ee830293759d0f4ebc58 namespace=moby
	Apr 07 14:25:27 multinode-140200 dockerd[1102]: time="2025-04-07T14:25:27.304904060Z" level=info msg="cleaning up dead shim" namespace=moby
	Apr 07 14:25:41 multinode-140200 dockerd[1102]: time="2025-04-07T14:25:41.695019747Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 14:25:41 multinode-140200 dockerd[1102]: time="2025-04-07T14:25:41.695272249Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 14:25:41 multinode-140200 dockerd[1102]: time="2025-04-07T14:25:41.695323950Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 14:25:41 multinode-140200 dockerd[1102]: time="2025-04-07T14:25:41.696417758Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a829a54b42be1       6e38f40d628db                                                                                         About a minute ago   Running             storage-provisioner       2                   1ee27873c3803       storage-provisioner
	f2d0c7c65742b       8c811b4aec35f                                                                                         2 minutes ago        Running             busybox                   1                   b65e33162b2bf       busybox-58667487b6-kt4sh
	9c3959cca70d4       c69fa2e9cbf5f                                                                                         2 minutes ago        Running             coredns                   1                   0c21e794e9c04       coredns-668d6bf9bc-5fp4f
	8bbaad0bf71ff       df3849d954c98                                                                                         2 minutes ago        Running             kindnet-cni               1                   b67415111fbfe       kindnet-zkw9q
	669cf4e7d29f6       6e38f40d628db                                                                                         2 minutes ago        Exited              storage-provisioner       1                   1ee27873c3803       storage-provisioner
	f572021ea7e6b       f1332858868e1                                                                                         2 minutes ago        Running             kube-proxy                1                   321e09b7dcd93       kube-proxy-9rx2d
	2ce13ce85209f       a9e7e6b294baf                                                                                         2 minutes ago        Running             etcd                      0                   96d49b3fdcd6f       etcd-multinode-140200
	00cba5f390383       85b7a174738ba                                                                                         2 minutes ago        Running             kube-apiserver            0                   484a57aed82ac       kube-apiserver-multinode-140200
	2d6ff13054b7b       d8e673e7c9983                                                                                         2 minutes ago        Running             kube-scheduler            1                   32f271567527a       kube-scheduler-multinode-140200
	59dea78fdf549       b6a454c5a800d                                                                                         2 minutes ago        Running             kube-controller-manager   1                   c9b2249f26eb5       kube-controller-manager-multinode-140200
	016ef6290457d       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   23 minutes ago       Exited              busybox                   0                   732344eba89ab       busybox-58667487b6-kt4sh
	b2d29d6fc7748       c69fa2e9cbf5f                                                                                         26 minutes ago       Exited              coredns                   0                   47eb0b16ce1df       coredns-668d6bf9bc-5fp4f
	2a1208136f157       kindest/kindnetd@sha256:e3c42406b0806c1f7e8a66838377936cbd2cdfd94d9b26a3eefedada8713d495              27 minutes ago       Exited              kindnet-cni               0                   0d317e51cbf8d       kindnet-zkw9q
	ec26042b52719       f1332858868e1                                                                                         27 minutes ago       Exited              kube-proxy                0                   728d07c29084b       kube-proxy-9rx2d
	8c615c7e05066       b6a454c5a800d                                                                                         27 minutes ago       Exited              kube-controller-manager   0                   8bd2f8fc3a28f       kube-controller-manager-multinode-140200
	159f6e03fef6f       d8e673e7c9983                                                                                         27 minutes ago       Exited              kube-scheduler            0                   d7cc037737938       kube-scheduler-multinode-140200
	
	
	==> coredns [9c3959cca70d] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 52f38634f47d27a60a843ea08b564c25eb754b24bbf06ec66f8366b52e126543ce16cee7cc062958162af0c89604123ac00e3f032b67ea2f0f7eb90c30818844
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:48356 - 26555 "HINFO IN 1700425050268607742.2469080280097989198. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.052577649s
	
	
	==> coredns [b2d29d6fc774] <==
	[INFO] 10.244.1.2:33319 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.0000803s
	[INFO] 10.244.1.2:34592 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000170002s
	[INFO] 10.244.1.2:36193 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000152002s
	[INFO] 10.244.1.2:57995 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,aa,rd,ra 111 0.000169302s
	[INFO] 10.244.1.2:52780 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000165902s
	[INFO] 10.244.1.2:42893 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000260902s
	[INFO] 10.244.1.2:60152 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000182002s
	[INFO] 10.244.0.3:48264 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000272902s
	[INFO] 10.244.0.3:59185 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000149502s
	[INFO] 10.244.0.3:57040 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000159402s
	[INFO] 10.244.0.3:52459 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000149602s
	[INFO] 10.244.1.2:57811 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000113901s
	[INFO] 10.244.1.2:40249 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000280403s
	[INFO] 10.244.1.2:34055 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000145801s
	[INFO] 10.244.1.2:43241 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000161602s
	[INFO] 10.244.0.3:46342 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000125401s
	[INFO] 10.244.0.3:42268 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.001186212s
	[INFO] 10.244.0.3:33339 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000125601s
	[INFO] 10.244.0.3:34226 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000112501s
	[INFO] 10.244.1.2:43833 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000279102s
	[INFO] 10.244.1.2:46614 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000189202s
	[INFO] 10.244.1.2:52638 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000225402s
	[INFO] 10.244.1.2:40399 - 5 "PTR IN 1.80.17.172.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000151102s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               multinode-140200
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-140200
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
	                    minikube.k8s.io/name=multinode-140200
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_07T14_00_01_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 13:59:57 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-140200
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 14:27:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 14:25:14 +0000   Mon, 07 Apr 2025 13:59:53 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 14:25:14 +0000   Mon, 07 Apr 2025 13:59:53 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 14:25:14 +0000   Mon, 07 Apr 2025 13:59:53 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 14:25:14 +0000   Mon, 07 Apr 2025 14:25:14 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.81.10
	  Hostname:    multinode-140200
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 2f25d6c0a49b4c0992466a9a6b06198e
	  System UUID:                25cd271c-0dd5-b642-826d-3f80486d9e38
	  Boot ID:                    b7fc80df-b56b-4843-a1b3-14b9d168284e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-kt4sh                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 coredns-668d6bf9bc-5fp4f                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     27m
	  kube-system                 etcd-multinode-140200                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         2m25s
	  kube-system                 kindnet-zkw9q                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      27m
	  kube-system                 kube-apiserver-multinode-140200             250m (12%)    0 (0%)      0 (0%)           0 (0%)         2m25s
	  kube-system                 kube-controller-manager-multinode-140200    200m (10%)    0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-proxy-9rx2d                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 kube-scheduler-multinode-140200             100m (5%)     0 (0%)      0 (0%)           0 (0%)         27m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         27m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 27m                    kube-proxy       
	  Normal   Starting                 2m22s                  kube-proxy       
	  Normal   NodeHasSufficientMemory  27m (x8 over 27m)      kubelet          Node multinode-140200 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    27m (x8 over 27m)      kubelet          Node multinode-140200 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     27m (x7 over 27m)      kubelet          Node multinode-140200 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     27m                    kubelet          Node multinode-140200 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  27m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  27m                    kubelet          Node multinode-140200 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    27m                    kubelet          Node multinode-140200 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 27m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           27m                    node-controller  Node multinode-140200 event: Registered Node multinode-140200 in Controller
	  Normal   NodeReady                26m                    kubelet          Node multinode-140200 status is now: NodeReady
	  Normal   Starting                 2m31s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  2m31s (x8 over 2m31s)  kubelet          Node multinode-140200 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    2m31s (x8 over 2m31s)  kubelet          Node multinode-140200 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     2m31s (x7 over 2m31s)  kubelet          Node multinode-140200 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  2m31s                  kubelet          Updated Node Allocatable limit across pods
	  Warning  Rebooted                 2m26s                  kubelet          Node multinode-140200 has been rebooted, boot id: b7fc80df-b56b-4843-a1b3-14b9d168284e
	  Normal   RegisteredNode           2m23s                  node-controller  Node multinode-140200 event: Registered Node multinode-140200 in Controller
	
	
	Name:               multinode-140200-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-140200-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
	                    minikube.k8s.io/name=multinode-140200
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_04_07T14_03_12_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 14:03:12 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-140200-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 14:21:26 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 07 Apr 2025 14:20:55 +0000   Mon, 07 Apr 2025 14:25:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 07 Apr 2025 14:20:55 +0000   Mon, 07 Apr 2025 14:25:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 07 Apr 2025 14:20:55 +0000   Mon, 07 Apr 2025 14:25:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 07 Apr 2025 14:20:55 +0000   Mon, 07 Apr 2025 14:25:47 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.17.82.40
	  Hostname:    multinode-140200-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 fcc15e3b5c0e48e2bb4f0703dca46560
	  System UUID:                f00434e9-33d2-e941-923c-7dd3ed460cdb
	  Boot ID:                    7b86de7f-44f1-42b1-bf68-d7c8427db9b7
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-58667487b6-vgl84    0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kindnet-pv67r               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      24m
	  kube-system                 kube-proxy-2r7lj            0 (0%)        0 (0%)      0 (0%)           0 (0%)         24m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  NodeHasSufficientMemory  24m (x2 over 24m)  kubelet          Node multinode-140200-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    24m (x2 over 24m)  kubelet          Node multinode-140200-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     24m (x2 over 24m)  kubelet          Node multinode-140200-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  24m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           24m                node-controller  Node multinode-140200-m02 event: Registered Node multinode-140200-m02 in Controller
	  Normal  NodeReady                23m                kubelet          Node multinode-140200-m02 status is now: NodeReady
	  Normal  RegisteredNode           2m23s              node-controller  Node multinode-140200-m02 event: Registered Node multinode-140200-m02 in Controller
	  Normal  NodeNotReady             93s                node-controller  Node multinode-140200-m02 status is now: NodeNotReady
	
	
	Name:               multinode-140200-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-140200-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
	                    minikube.k8s.io/name=multinode-140200
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2025_04_07T14_19_54_0700
	                    minikube.k8s.io/version=v1.35.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 14:19:53 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-140200-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 14:21:05 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 07 Apr 2025 14:20:10 +0000   Mon, 07 Apr 2025 14:21:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 07 Apr 2025 14:20:10 +0000   Mon, 07 Apr 2025 14:21:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 07 Apr 2025 14:20:10 +0000   Mon, 07 Apr 2025 14:21:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 07 Apr 2025 14:20:10 +0000   Mon, 07 Apr 2025 14:21:55 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.17.83.62
	  Hostname:    multinode-140200-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 88fa2f749e384998b91def2acd86c817
	  System UUID:                142f4eb4-7290-a546-ae18-1c740de87b3c
	  Boot ID:                    e0d3c02f-823a-497e-be8c-83890a420b48
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-rnp2q       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      19m
	  kube-system                 kube-proxy-kvg58    0 (0%)        0 (0%)      0 (0%)           0 (0%)         19m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 19m                    kube-proxy       
	  Normal  Starting                 7m23s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  19m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  19m (x2 over 19m)      kubelet          Node multinode-140200-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    19m (x2 over 19m)      kubelet          Node multinode-140200-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     19m (x2 over 19m)      kubelet          Node multinode-140200-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                18m                    kubelet          Node multinode-140200-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  7m27s (x2 over 7m27s)  kubelet          Node multinode-140200-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m27s (x2 over 7m27s)  kubelet          Node multinode-140200-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m27s (x2 over 7m27s)  kubelet          Node multinode-140200-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m27s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m25s                  node-controller  Node multinode-140200-m03 event: Registered Node multinode-140200-m03 in Controller
	  Normal  NodeReady                7m10s                  kubelet          Node multinode-140200-m03 status is now: NodeReady
	  Normal  NodeNotReady             5m25s                  node-controller  Node multinode-140200-m03 status is now: NodeNotReady
	  Normal  RegisteredNode           2m23s                  node-controller  Node multinode-140200-m03 event: Registered Node multinode-140200-m03 in Controller
	
	
	==> dmesg <==
	[Apr 7 14:23] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +1.311409] psmouse serio1: trackpoint: failed to get extended button data, assuming 3 buttons
	[  +1.391863] systemd-fstab-generator[115]: Ignoring "noauto" option for root device
	[  +7.245302] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[Apr 7 14:24] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.171978] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[ +28.454443] systemd-fstab-generator[1018]: Ignoring "noauto" option for root device
	[  +0.115298] kauditd_printk_skb: 75 callbacks suppressed
	[  +0.564018] systemd-fstab-generator[1060]: Ignoring "noauto" option for root device
	[  +0.193248] systemd-fstab-generator[1072]: Ignoring "noauto" option for root device
	[  +0.236611] systemd-fstab-generator[1086]: Ignoring "noauto" option for root device
	[  +3.008635] systemd-fstab-generator[1327]: Ignoring "noauto" option for root device
	[  +0.197482] systemd-fstab-generator[1339]: Ignoring "noauto" option for root device
	[  +0.192674] systemd-fstab-generator[1351]: Ignoring "noauto" option for root device
	[  +0.262756] systemd-fstab-generator[1366]: Ignoring "noauto" option for root device
	[  +0.903490] systemd-fstab-generator[1491]: Ignoring "noauto" option for root device
	[  +0.109825] kauditd_printk_skb: 206 callbacks suppressed
	[  +4.244099] systemd-fstab-generator[1642]: Ignoring "noauto" option for root device
	[  +1.263157] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.809093] kauditd_printk_skb: 30 callbacks suppressed
	[  +4.098125] systemd-fstab-generator[2523]: Ignoring "noauto" option for root device
	[Apr 7 14:25] kauditd_printk_skb: 70 callbacks suppressed
	[ +15.596391] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [2ce13ce85209] <==
	{"level":"info","ts":"2025-04-07T14:24:51.523901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56b92fcdf3016dd0 switched to configuration voters=(6249078519545032144)"}
	{"level":"info","ts":"2025-04-07T14:24:51.524249Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"cc5f18ba5e9dce7b","local-member-id":"56b92fcdf3016dd0","added-peer-id":"56b92fcdf3016dd0","added-peer-peer-urls":["https://172.17.92.89:2380"]}
	{"level":"info","ts":"2025-04-07T14:24:51.529270Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"cc5f18ba5e9dce7b","local-member-id":"56b92fcdf3016dd0","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T14:24:51.529615Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T14:24:51.530103Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-07T14:24:51.530786Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"56b92fcdf3016dd0","initial-advertise-peer-urls":["https://172.17.81.10:2380"],"listen-peer-urls":["https://172.17.81.10:2380"],"advertise-client-urls":["https://172.17.81.10:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.81.10:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-07T14:24:51.534263Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"172.17.81.10:2380"}
	{"level":"info","ts":"2025-04-07T14:24:51.539731Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"172.17.81.10:2380"}
	{"level":"info","ts":"2025-04-07T14:24:51.534002Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-07T14:24:52.667114Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56b92fcdf3016dd0 is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-07T14:24:52.667518Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56b92fcdf3016dd0 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-07T14:24:52.667740Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56b92fcdf3016dd0 received MsgPreVoteResp from 56b92fcdf3016dd0 at term 2"}
	{"level":"info","ts":"2025-04-07T14:24:52.667901Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56b92fcdf3016dd0 became candidate at term 3"}
	{"level":"info","ts":"2025-04-07T14:24:52.668006Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56b92fcdf3016dd0 received MsgVoteResp from 56b92fcdf3016dd0 at term 3"}
	{"level":"info","ts":"2025-04-07T14:24:52.668389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"56b92fcdf3016dd0 became leader at term 3"}
	{"level":"info","ts":"2025-04-07T14:24:52.668670Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 56b92fcdf3016dd0 elected leader 56b92fcdf3016dd0 at term 3"}
	{"level":"info","ts":"2025-04-07T14:24:52.674916Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T14:24:52.683961Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T14:24:52.688099Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-07T14:24:52.688362Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-07T14:24:52.685120Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T14:24:52.674884Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"56b92fcdf3016dd0","local-member-attributes":"{Name:multinode-140200 ClientURLs:[https://172.17.81.10:2379]}","request-path":"/0/members/56b92fcdf3016dd0/attributes","cluster-id":"cc5f18ba5e9dce7b","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-07T14:24:52.692504Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.81.10:2379"}
	{"level":"info","ts":"2025-04-07T14:24:52.694783Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T14:24:52.695698Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 14:27:20 up 4 min,  0 users,  load average: 0.23, 0.22, 0.09
	Linux multinode-140200 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2a1208136f15] <==
	I0407 14:21:25.726767       1 main.go:324] Node multinode-140200-m02 has CIDR [10.244.1.0/24] 
	I0407 14:21:35.733941       1 main.go:297] Handling node with IPs: map[172.17.92.89:{}]
	I0407 14:21:35.734076       1 main.go:301] handling current node
	I0407 14:21:35.734110       1 main.go:297] Handling node with IPs: map[172.17.82.40:{}]
	I0407 14:21:35.734119       1 main.go:324] Node multinode-140200-m02 has CIDR [10.244.1.0/24] 
	I0407 14:21:35.735061       1 main.go:297] Handling node with IPs: map[172.17.83.62:{}]
	I0407 14:21:35.735102       1 main.go:324] Node multinode-140200-m03 has CIDR [10.244.3.0/24] 
	I0407 14:21:45.724895       1 main.go:297] Handling node with IPs: map[172.17.92.89:{}]
	I0407 14:21:45.725041       1 main.go:301] handling current node
	I0407 14:21:45.725064       1 main.go:297] Handling node with IPs: map[172.17.82.40:{}]
	I0407 14:21:45.725072       1 main.go:324] Node multinode-140200-m02 has CIDR [10.244.1.0/24] 
	I0407 14:21:45.725504       1 main.go:297] Handling node with IPs: map[172.17.83.62:{}]
	I0407 14:21:45.725542       1 main.go:324] Node multinode-140200-m03 has CIDR [10.244.3.0/24] 
	I0407 14:21:55.726870       1 main.go:297] Handling node with IPs: map[172.17.92.89:{}]
	I0407 14:21:55.726926       1 main.go:301] handling current node
	I0407 14:21:55.726946       1 main.go:297] Handling node with IPs: map[172.17.82.40:{}]
	I0407 14:21:55.726954       1 main.go:324] Node multinode-140200-m02 has CIDR [10.244.1.0/24] 
	I0407 14:21:55.727148       1 main.go:297] Handling node with IPs: map[172.17.83.62:{}]
	I0407 14:21:55.727161       1 main.go:324] Node multinode-140200-m03 has CIDR [10.244.3.0/24] 
	I0407 14:22:05.728407       1 main.go:297] Handling node with IPs: map[172.17.92.89:{}]
	I0407 14:22:05.728470       1 main.go:301] handling current node
	I0407 14:22:05.728497       1 main.go:297] Handling node with IPs: map[172.17.82.40:{}]
	I0407 14:22:05.728573       1 main.go:324] Node multinode-140200-m02 has CIDR [10.244.1.0/24] 
	I0407 14:22:05.728809       1 main.go:297] Handling node with IPs: map[172.17.83.62:{}]
	I0407 14:22:05.728843       1 main.go:324] Node multinode-140200-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [8bbaad0bf71f] <==
	I0407 14:26:38.759999       1 main.go:324] Node multinode-140200-m03 has CIDR [10.244.3.0/24] 
	I0407 14:26:48.757771       1 main.go:297] Handling node with IPs: map[172.17.82.40:{}]
	I0407 14:26:48.757922       1 main.go:324] Node multinode-140200-m02 has CIDR [10.244.1.0/24] 
	I0407 14:26:48.759421       1 main.go:297] Handling node with IPs: map[172.17.83.62:{}]
	I0407 14:26:48.759650       1 main.go:324] Node multinode-140200-m03 has CIDR [10.244.3.0/24] 
	I0407 14:26:48.759988       1 main.go:297] Handling node with IPs: map[172.17.81.10:{}]
	I0407 14:26:48.760268       1 main.go:301] handling current node
	I0407 14:26:58.752380       1 main.go:297] Handling node with IPs: map[172.17.82.40:{}]
	I0407 14:26:58.752499       1 main.go:324] Node multinode-140200-m02 has CIDR [10.244.1.0/24] 
	I0407 14:26:58.752934       1 main.go:297] Handling node with IPs: map[172.17.83.62:{}]
	I0407 14:26:58.752951       1 main.go:324] Node multinode-140200-m03 has CIDR [10.244.3.0/24] 
	I0407 14:26:58.753101       1 main.go:297] Handling node with IPs: map[172.17.81.10:{}]
	I0407 14:26:58.753114       1 main.go:301] handling current node
	I0407 14:27:08.760837       1 main.go:297] Handling node with IPs: map[172.17.81.10:{}]
	I0407 14:27:08.761006       1 main.go:301] handling current node
	I0407 14:27:08.761028       1 main.go:297] Handling node with IPs: map[172.17.82.40:{}]
	I0407 14:27:08.761038       1 main.go:324] Node multinode-140200-m02 has CIDR [10.244.1.0/24] 
	I0407 14:27:08.761715       1 main.go:297] Handling node with IPs: map[172.17.83.62:{}]
	I0407 14:27:08.761802       1 main.go:324] Node multinode-140200-m03 has CIDR [10.244.3.0/24] 
	I0407 14:27:18.758123       1 main.go:297] Handling node with IPs: map[172.17.82.40:{}]
	I0407 14:27:18.758179       1 main.go:324] Node multinode-140200-m02 has CIDR [10.244.1.0/24] 
	I0407 14:27:18.758575       1 main.go:297] Handling node with IPs: map[172.17.83.62:{}]
	I0407 14:27:18.758709       1 main.go:324] Node multinode-140200-m03 has CIDR [10.244.3.0/24] 
	I0407 14:27:18.758971       1 main.go:297] Handling node with IPs: map[172.17.81.10:{}]
	I0407 14:27:18.759131       1 main.go:301] handling current node
	
	
	==> kube-apiserver [00cba5f39038] <==
	I0407 14:24:54.475703       1 autoregister_controller.go:144] Starting autoregister controller
	I0407 14:24:54.475865       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0407 14:24:54.476008       1 cache.go:39] Caches are synced for autoregister controller
	I0407 14:24:54.480755       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0407 14:24:54.480877       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0407 14:24:54.481189       1 shared_informer.go:320] Caches are synced for configmaps
	I0407 14:24:54.481419       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0407 14:24:54.494839       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0407 14:24:54.517465       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0407 14:24:54.518051       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0407 14:24:54.523694       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0407 14:24:54.529422       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0407 14:24:54.542250       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0407 14:24:54.542514       1 policy_source.go:240] refreshing policies
	I0407 14:24:54.607981       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0407 14:24:55.319166       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0407 14:24:55.536351       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	W0407 14:24:56.149885       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.81.10]
	I0407 14:24:56.153001       1 controller.go:615] quota admission added evaluator for: endpoints
	I0407 14:24:56.170995       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0407 14:24:57.876761       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0407 14:24:57.964879       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0407 14:24:58.162338       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0407 14:24:58.472388       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0407 14:24:58.489872       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [59dea78fdf54] <==
	I0407 14:24:57.800565       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-140200-m02"
	I0407 14:24:57.802029       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="multinode-140200"
	I0407 14:24:57.802572       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0407 14:24:57.804536       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m03"
	I0407 14:24:57.809960       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200"
	I0407 14:24:57.908589       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m03"
	I0407 14:24:57.977617       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="224.656602ms"
	I0407 14:24:57.979211       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="34.3µs"
	I0407 14:24:57.987648       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="228.002422ms"
	I0407 14:24:57.988582       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="235.601µs"
	I0407 14:25:07.898529       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200"
	I0407 14:25:12.961045       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="23.226964ms"
	I0407 14:25:12.961872       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="206.204µs"
	I0407 14:25:12.996279       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="119.203µs"
	I0407 14:25:13.040313       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="16.819225ms"
	I0407 14:25:13.040907       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="39.3µs"
	I0407 14:25:15.011843       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200"
	I0407 14:25:15.013819       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-140200-m02"
	I0407 14:25:15.034159       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200"
	I0407 14:25:17.835806       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200"
	I0407 14:25:47.867741       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m02"
	I0407 14:25:47.893577       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m02"
	I0407 14:25:47.965497       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="16.117718ms"
	I0407 14:25:47.965625       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-58667487b6" duration="34.4µs"
	I0407 14:25:53.072959       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m02"
	
	
	==> kube-controller-manager [8c615c7e0506] <==
	I0407 14:17:00.157775       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m03"
	I0407 14:17:00.185965       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m03"
	I0407 14:17:05.278770       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m03"
	I0407 14:18:30.602894       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200"
	I0407 14:19:41.389731       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m03"
	I0407 14:19:41.415859       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m03"
	I0407 14:19:47.097513       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-140200-m02"
	I0407 14:19:53.337364       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-140200-m02"
	I0407 14:19:53.339518       1 actual_state_of_world.go:541] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-140200-m03\" does not exist"
	I0407 14:19:53.381924       1 range_allocator.go:428] "Set node PodCIDR" logger="node-ipam-controller" node="multinode-140200-m03" podCIDRs=["10.244.3.0/24"]
	I0407 14:19:53.381977       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m03"
	I0407 14:19:53.382009       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m03"
	I0407 14:19:53.727434       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m03"
	I0407 14:19:54.317624       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m03"
	I0407 14:19:55.290020       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m03"
	I0407 14:20:03.446813       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m03"
	I0407 14:20:10.253845       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-140200-m02"
	I0407 14:20:10.254352       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m03"
	I0407 14:20:10.271799       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m03"
	I0407 14:20:15.267033       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m03"
	I0407 14:20:55.199066       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m02"
	I0407 14:21:55.392826       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="multinode-140200-m02"
	I0407 14:21:55.392923       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m03"
	I0407 14:21:55.489682       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m03"
	I0407 14:22:00.593374       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="multinode-140200-m03"
	
	
	==> kube-proxy [ec26042b5271] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0407 14:00:07.337754       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0407 14:00:07.466119       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.17.92.89"]
	E0407 14:00:07.466279       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0407 14:00:07.567557       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0407 14:00:07.567717       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0407 14:00:07.567756       1 server_linux.go:170] "Using iptables Proxier"
	I0407 14:00:07.574629       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0407 14:00:07.577367       1 server.go:497] "Version info" version="v1.32.2"
	I0407 14:00:07.577404       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 14:00:07.585487       1 config.go:199] "Starting service config controller"
	I0407 14:00:07.586284       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0407 14:00:07.586337       1 config.go:329] "Starting node config controller"
	I0407 14:00:07.586345       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0407 14:00:07.589540       1 config.go:105] "Starting endpoint slice config controller"
	I0407 14:00:07.589593       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0407 14:00:07.686785       1 shared_informer.go:320] Caches are synced for node config
	I0407 14:00:07.686825       1 shared_informer.go:320] Caches are synced for service config
	I0407 14:00:07.694325       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-proxy [f572021ea7e6] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0407 14:24:57.747447       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0407 14:24:57.901868       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.17.81.10"]
	E0407 14:24:57.903343       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0407 14:24:58.047533       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0407 14:24:58.047675       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0407 14:24:58.047725       1 server_linux.go:170] "Using iptables Proxier"
	I0407 14:24:58.062776       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0407 14:24:58.068444       1 server.go:497] "Version info" version="v1.32.2"
	I0407 14:24:58.068815       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 14:24:58.078611       1 config.go:199] "Starting service config controller"
	I0407 14:24:58.081687       1 config.go:105] "Starting endpoint slice config controller"
	I0407 14:24:58.081703       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0407 14:24:58.082055       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0407 14:24:58.082154       1 config.go:329] "Starting node config controller"
	I0407 14:24:58.082272       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0407 14:24:58.181862       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0407 14:24:58.182356       1 shared_informer.go:320] Caches are synced for service config
	I0407 14:24:58.182651       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [159f6e03fef6] <==
	W0407 13:59:58.496079       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0407 13:59:58.496276       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0407 13:59:58.551175       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0407 13:59:58.551286       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User \"system:kube-scheduler\" cannot list resource \"poddisruptionbudgets\" in API group \"policy\" at the cluster scope" logger="UnhandledError"
	W0407 13:59:58.610381       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0407 13:59:58.610476       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 13:59:58.627893       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0407 13:59:58.628177       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 13:59:58.719927       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0407 13:59:58.720228       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 13:59:58.814245       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0407 13:59:58.814720       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 13:59:58.940493       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0407 13:59:58.940937       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 13:59:58.976373       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0407 13:59:58.976407       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0407 13:59:59.037635       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0407 13:59:59.038094       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 13:59:59.038018       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0407 13:59:59.038595       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0407 14:00:00.428814       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 14:22:11.890784       1 configmap_cafile_content.go:226] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0407 14:22:11.894927       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0407 14:22:11.895067       1 tlsconfig.go:258] "Shutting down DynamicServingCertificateController"
	E0407 14:22:11.956777       1 run.go:72] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [2d6ff13054b7] <==
	I0407 14:24:52.314455       1 serving.go:386] Generated self-signed cert in-memory
	W0407 14:24:54.384821       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0407 14:24:54.384869       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0407 14:24:54.384883       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0407 14:24:54.384909       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0407 14:24:54.472025       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0407 14:24:54.475555       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 14:24:54.482684       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0407 14:24:54.490921       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0407 14:24:54.491177       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 14:24:54.491860       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0407 14:24:54.592339       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 07 14:25:03 multinode-140200 kubelet[1649]: E0407 14:25:03.174002    1649 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/dcbaa934-5251-4179-bd5e-60d5d2ba403b-kube-api-access-cnn6l podName:dcbaa934-5251-4179-bd5e-60d5d2ba403b nodeName:}" failed. No retries permitted until 2025-04-07 14:25:11.17398408 +0000 UTC m=+21.966399846 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-cnn6l" (UniqueName: "kubernetes.io/projected/dcbaa934-5251-4179-bd5e-60d5d2ba403b-kube-api-access-cnn6l") pod "busybox-58667487b6-kt4sh" (UID: "dcbaa934-5251-4179-bd5e-60d5d2ba403b") : object "default"/"kube-root-ca.crt" not registered
	Apr 07 14:25:03 multinode-140200 kubelet[1649]: E0407 14:25:03.500128    1649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-5fp4f" podUID="437226ae-e63d-4245-bbea-ad5c41ff9a93"
	Apr 07 14:25:04 multinode-140200 kubelet[1649]: E0407 14:25:04.478725    1649 kubelet.go:3008] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized"
	Apr 07 14:25:04 multinode-140200 kubelet[1649]: E0407 14:25:04.500355    1649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-kt4sh" podUID="dcbaa934-5251-4179-bd5e-60d5d2ba403b"
	Apr 07 14:25:05 multinode-140200 kubelet[1649]: E0407 14:25:05.500265    1649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-5fp4f" podUID="437226ae-e63d-4245-bbea-ad5c41ff9a93"
	Apr 07 14:25:06 multinode-140200 kubelet[1649]: E0407 14:25:06.500750    1649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-kt4sh" podUID="dcbaa934-5251-4179-bd5e-60d5d2ba403b"
	Apr 07 14:25:07 multinode-140200 kubelet[1649]: E0407 14:25:07.501158    1649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="kube-system/coredns-668d6bf9bc-5fp4f" podUID="437226ae-e63d-4245-bbea-ad5c41ff9a93"
	Apr 07 14:25:08 multinode-140200 kubelet[1649]: E0407 14:25:08.500741    1649 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized" pod="default/busybox-58667487b6-kt4sh" podUID="dcbaa934-5251-4179-bd5e-60d5d2ba403b"
	Apr 07 14:25:11 multinode-140200 kubelet[1649]: I0407 14:25:11.885287    1649 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0c21e794e9c04eddeeeb09893ce2537bdda3ec1b344068dee8726b556dd5420f"
	Apr 07 14:25:28 multinode-140200 kubelet[1649]: I0407 14:25:28.191548    1649 scope.go:117] "RemoveContainer" containerID="1e0d3f9a0f2174bb8cd8c290542ef69078fed0a1d20cdf15b40a0c8a022be52f"
	Apr 07 14:25:28 multinode-140200 kubelet[1649]: I0407 14:25:28.192183    1649 scope.go:117] "RemoveContainer" containerID="669cf4e7d29f662a3b9693cd321bc677d9c57862a930ee830293759d0f4ebc58"
	Apr 07 14:25:28 multinode-140200 kubelet[1649]: E0407 14:25:28.192409    1649 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(01df03d8-8816-480c-941b-180069d26997)\"" pod="kube-system/storage-provisioner" podUID="01df03d8-8816-480c-941b-180069d26997"
	Apr 07 14:25:41 multinode-140200 kubelet[1649]: I0407 14:25:41.501052    1649 scope.go:117] "RemoveContainer" containerID="669cf4e7d29f662a3b9693cd321bc677d9c57862a930ee830293759d0f4ebc58"
	Apr 07 14:25:49 multinode-140200 kubelet[1649]: I0407 14:25:49.496614    1649 scope.go:117] "RemoveContainer" containerID="783fd069538d1d17776cad789518464b699a00dd84ad4a3b5432058456a274ae"
	Apr 07 14:25:49 multinode-140200 kubelet[1649]: E0407 14:25:49.540626    1649 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 07 14:25:49 multinode-140200 kubelet[1649]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 07 14:25:49 multinode-140200 kubelet[1649]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 07 14:25:49 multinode-140200 kubelet[1649]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 07 14:25:49 multinode-140200 kubelet[1649]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 07 14:25:49 multinode-140200 kubelet[1649]: I0407 14:25:49.568106    1649 scope.go:117] "RemoveContainer" containerID="92c49129b5b09e823bfa793bdb548c6a946f771f090a6378c8a519aac86f3cc8"
	Apr 07 14:26:49 multinode-140200 kubelet[1649]: E0407 14:26:49.529456    1649 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 07 14:26:49 multinode-140200 kubelet[1649]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 07 14:26:49 multinode-140200 kubelet[1649]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 07 14:26:49 multinode-140200 kubelet[1649]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 07 14:26:49 multinode-140200 kubelet[1649]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-140200 -n multinode-140200
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p multinode-140200 -n multinode-140200: (12.9303493s)
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-140200 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (405.86s)

                                                
                                    
TestKubernetesUpgrade (1297.29s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-003200 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-003200 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: exit status 90 (6m17.7368663s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-003200] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "kubernetes-upgrade-003200" primary control-plane node in "kubernetes-upgrade-003200" cluster
	* Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 14:49:07.347951   13792 out.go:345] Setting OutFile to fd 1392 ...
	I0407 14:49:07.426326   13792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 14:49:07.426326   13792 out.go:358] Setting ErrFile to fd 1788...
	I0407 14:49:07.426326   13792 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 14:49:07.446328   13792 out.go:352] Setting JSON to false
	I0407 14:49:07.449330   13792 start.go:129] hostinfo: {"hostname":"minikube3","uptime":9139,"bootTime":1744028207,"procs":183,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0407 14:49:07.449330   13792 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0407 14:49:07.453330   13792 out.go:177] * [kubernetes-upgrade-003200] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0407 14:49:07.459331   13792 notify.go:220] Checking for updates...
	I0407 14:49:07.461331   13792 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 14:49:07.464331   13792 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 14:49:07.467347   13792 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0407 14:49:07.470330   13792 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 14:49:07.473330   13792 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 14:49:07.476338   13792 config.go:182] Loaded profile config "NoKubernetes-817400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 14:49:07.477331   13792 config.go:182] Loaded profile config "force-systemd-flag-817400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 14:49:07.477331   13792 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 14:49:07.478331   13792 config.go:182] Loaded profile config "running-upgrade-817400": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.24.1
	I0407 14:49:07.478331   13792 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 14:49:13.187554   13792 out.go:177] * Using the hyperv driver based on user configuration
	I0407 14:49:13.191449   13792 start.go:297] selected driver: hyperv
	I0407 14:49:13.191596   13792 start.go:901] validating driver "hyperv" against <nil>
	I0407 14:49:13.191596   13792 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 14:49:13.243666   13792 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 14:49:13.244846   13792 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0407 14:49:13.244846   13792 cni.go:84] Creating CNI manager for ""
	I0407 14:49:13.244846   13792 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0407 14:49:13.245263   13792 start.go:340] cluster config:
	{Name:kubernetes-upgrade-003200 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-003200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 14:49:13.245617   13792 iso.go:125] acquiring lock: {Name:mk99bbb6a54210c1995fdf151b41c83b57c3735b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 14:49:13.245976   13792 out.go:177] * Starting "kubernetes-upgrade-003200" primary control-plane node in "kubernetes-upgrade-003200" cluster
	I0407 14:49:13.255253   13792 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0407 14:49:13.255448   13792 preload.go:146] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0407 14:49:13.255507   13792 cache.go:56] Caching tarball of preloaded images
	I0407 14:49:13.255774   13792 preload.go:172] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0407 14:49:13.255774   13792 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0407 14:49:13.255774   13792 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubernetes-upgrade-003200\config.json ...
	I0407 14:49:13.256442   13792 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubernetes-upgrade-003200\config.json: {Name:mk136048ef4cee76a78f953df9c2fb4c467f7d70 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 14:49:13.257812   13792 start.go:360] acquireMachinesLock for kubernetes-upgrade-003200: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 14:51:42.381499   13792 start.go:364] duration metric: took 2m29.1220626s to acquireMachinesLock for "kubernetes-upgrade-003200"
	I0407 14:51:42.381499   13792 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-003200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-003200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 14:51:42.382223   13792 start.go:125] createHost starting for "" (driver="hyperv")
	I0407 14:51:42.386128   13792 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0407 14:51:42.387352   13792 start.go:159] libmachine.API.Create for "kubernetes-upgrade-003200" (driver="hyperv")
	I0407 14:51:42.387352   13792 client.go:168] LocalClient.Create starting
	I0407 14:51:42.390149   13792 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0407 14:51:42.390841   13792 main.go:141] libmachine: Decoding PEM data...
	I0407 14:51:42.390841   13792 main.go:141] libmachine: Parsing certificate...
	I0407 14:51:42.390841   13792 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0407 14:51:42.391371   13792 main.go:141] libmachine: Decoding PEM data...
	I0407 14:51:42.391639   13792 main.go:141] libmachine: Parsing certificate...
	I0407 14:51:42.391639   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0407 14:51:44.417044   13792 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0407 14:51:44.417292   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:51:44.417292   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0407 14:51:46.242203   13792 main.go:141] libmachine: [stdout =====>] : False
	
	I0407 14:51:46.242309   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:51:46.242309   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0407 14:51:47.801122   13792 main.go:141] libmachine: [stdout =====>] : True
	
	I0407 14:51:47.801272   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:51:47.801332   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0407 14:51:51.709600   13792 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0407 14:51:51.709600   13792 main.go:141] libmachine: [stderr =====>] : 
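Note: the switch query above is how the Hyper-V driver picks a network. It shells out to PowerShell, asks Get-VMSwitch for external switches or the well-known "Default Switch" GUID, and parses the ConvertTo-Json output. In this run only the built-in Default Switch (SwitchType 1, i.e. internal) exists, so it is selected. A minimal sketch of the same query and parse, assuming powershell.exe is on PATH (illustrative, not the driver's code):

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// vmSwitch mirrors the fields selected by the PowerShell pipeline in the log.
	type vmSwitch struct {
		Id         string
		Name       string
		SwitchType int // Hyper-V enum: 0=Private, 1=Internal, 2=External
	}
	
	func main() {
		// Same query the log shows: external switches, or the well-known Default Switch GUID.
		cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
			`[Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch | Select Id, Name, SwitchType | Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')} | Sort-Object -Property SwitchType)`)
		out, err := cmd.Output()
		if err != nil {
			panic(err)
		}
		var switches []vmSwitch
		if err := json.Unmarshal(out, &switches); err != nil {
			panic(err)
		}
		for _, s := range switches {
			fmt.Printf("switch %q (%s) type=%d\n", s.Name, s.Id, s.SwitchType)
		}
	}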
	I0407 14:51:51.712153   13792 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0407 14:51:52.231449   13792 main.go:141] libmachine: Creating SSH key...
	I0407 14:51:52.581265   13792 main.go:141] libmachine: Creating VM...
	I0407 14:51:52.581265   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0407 14:51:55.908136   13792 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0407 14:51:55.909134   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:51:55.909250   13792 main.go:141] libmachine: Using switch "Default Switch"
	I0407 14:51:55.909375   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0407 14:51:57.737315   13792 main.go:141] libmachine: [stdout =====>] : True
	
	I0407 14:51:57.737315   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:51:57.737315   13792 main.go:141] libmachine: Creating VHD
	I0407 14:51:57.737315   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\kubernetes-upgrade-003200\fixed.vhd' -SizeBytes 10MB -Fixed
	I0407 14:52:01.783058   13792 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\kubernetes-upgrade-003200\
	                          fixed.vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 107674B5-E3A5-4CFB-ABD9-58D27A6F03A4
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0407 14:52:01.783058   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:52:01.783058   13792 main.go:141] libmachine: Writing magic tar header
	I0407 14:52:01.783058   13792 main.go:141] libmachine: Writing SSH key tar header
	I0407 14:52:01.795363   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\kubernetes-upgrade-003200\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\kubernetes-upgrade-003200\disk.vhd' -VHDType Dynamic -DeleteSource
	I0407 14:52:05.543270   13792 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:52:05.543346   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:52:05.543385   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\kubernetes-upgrade-003200\disk.vhd' -SizeBytes 20000MB
	I0407 14:52:08.314422   13792 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:52:08.314605   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:52:08.314792   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM kubernetes-upgrade-003200 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\kubernetes-upgrade-003200' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB
	I0407 14:52:12.099994   13792 main.go:141] libmachine: [stdout =====>] : 
	Name                      State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                      ----- ----------- ----------------- ------   ------             -------
	kubernetes-upgrade-003200 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0407 14:52:12.099994   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:52:12.100843   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName kubernetes-upgrade-003200 -DynamicMemoryEnabled $false
	I0407 14:52:14.501955   13792 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:52:14.501955   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:52:14.501955   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor kubernetes-upgrade-003200 -Count 2
	I0407 14:52:16.818228   13792 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:52:16.818319   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:52:16.818534   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName kubernetes-upgrade-003200 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\kubernetes-upgrade-003200\boot2docker.iso'
	I0407 14:52:19.477137   13792 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:52:19.477137   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:52:19.477137   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName kubernetes-upgrade-003200 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\kubernetes-upgrade-003200\disk.vhd'
	I0407 14:52:22.214027   13792 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:52:22.214274   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:52:22.214274   13792 main.go:141] libmachine: Starting VM...
	I0407 14:52:22.214391   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM kubernetes-upgrade-003200
	I0407 14:52:25.453034   13792 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:52:25.453090   13792 main.go:141] libmachine: [stderr =====>] : 
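Note: between the switch query and the Start-VM above, the VM is assembled with a fixed cmdlet sequence: create a small fixed VHD (into which the driver writes a tar header carrying the SSH key, per the "Writing magic tar header" lines), convert it to a dynamic disk, resize it to 20000MB, then New-VM, Set-VMMemory, Set-VMProcessor, Set-VMDvdDrive, Add-VMHardDiskDrive and Start-VM. A sketch that replays the same sequence (paths and names taken from this run; the tar-header step is omitted):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// ps runs one non-interactive PowerShell invocation, as each step in the log does.
	func ps(command string) error {
		out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive", command).CombinedOutput()
		fmt.Printf("%s\n%s\n", command, out)
		return err
	}
	
	func main() {
		const name = "kubernetes-upgrade-003200"
		const dir = `C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\kubernetes-upgrade-003200`
		steps := []string{
			`Hyper-V\New-VHD -Path '` + dir + `\fixed.vhd' -SizeBytes 10MB -Fixed`,
			`Hyper-V\Convert-VHD -Path '` + dir + `\fixed.vhd' -DestinationPath '` + dir + `\disk.vhd' -VHDType Dynamic -DeleteSource`,
			`Hyper-V\Resize-VHD -Path '` + dir + `\disk.vhd' -SizeBytes 20000MB`,
			`Hyper-V\New-VM ` + name + ` -Path '` + dir + `' -SwitchName 'Default Switch' -MemoryStartupBytes 2200MB`,
			`Hyper-V\Set-VMMemory -VMName ` + name + ` -DynamicMemoryEnabled $false`,
			`Hyper-V\Set-VMProcessor ` + name + ` -Count 2`,
			`Hyper-V\Set-VMDvdDrive -VMName ` + name + ` -Path '` + dir + `\boot2docker.iso'`,
			`Hyper-V\Add-VMHardDiskDrive -VMName ` + name + ` -Path '` + dir + `\disk.vhd'`,
			`Hyper-V\Start-VM ` + name,
		}
		for _, s := range steps {
			if err := ps(s); err != nil {
				panic(err)
			}
		}
	}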
	I0407 14:52:25.453090   13792 main.go:141] libmachine: Waiting for host to start...
	I0407 14:52:25.453090   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 14:52:27.879513   13792 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:52:27.879513   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:52:27.879513   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:52:30.506949   13792 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:52:30.507919   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:52:31.508528   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 14:52:33.810957   13792 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:52:33.810957   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:52:33.810957   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:52:36.461399   13792 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:52:36.461399   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:52:37.462186   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 14:52:40.084911   13792 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:52:40.084911   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:52:40.085118   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:52:42.970289   13792 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:52:42.970289   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:52:43.971512   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 14:52:46.448452   13792 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:52:46.449445   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:52:46.449513   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:52:49.365625   13792 main.go:141] libmachine: [stdout =====>] : 
	I0407 14:52:49.365625   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:52:50.366624   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 14:52:52.975480   13792 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:52:52.975480   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:52:52.976469   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:52:55.846832   13792 main.go:141] libmachine: [stdout =====>] : 172.17.94.208
	
	I0407 14:52:55.846832   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:52:55.847463   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 14:52:58.241322   13792 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:52:58.241322   13792 main.go:141] libmachine: [stderr =====>] : 
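Note: after Start-VM the driver polls the VM state and the first network adapter's first IP address until one is reported; here it took roughly 30 seconds (five queries) before 172.17.94.208 appeared. A minimal polling sketch under the same assumptions (powershell.exe on PATH, names from this run):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	// waitForIP polls Hyper-V for the VM's first IP address, mirroring the
	// (( Get-VM ... ).networkadapters[0]).ipaddresses[0] query in the log.
	func waitForIP(vm string, timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			out, err := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
				"(( Hyper-V\\Get-VM "+vm+" ).networkadapters[0]).ipaddresses[0]").Output()
			if err != nil {
				return "", err
			}
			if ip := strings.TrimSpace(string(out)); ip != "" {
				return ip, nil
			}
			time.Sleep(time.Second) // the log shows roughly one retry per second of idle time
		}
		return "", fmt.Errorf("timed out waiting for %s to report an IP", vm)
	}
	
	func main() {
		ip, err := waitForIP("kubernetes-upgrade-003200", 5*time.Minute)
		if err != nil {
			panic(err)
		}
		fmt.Println("VM IP:", ip)
	}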
	I0407 14:52:58.241601   13792 machine.go:93] provisionDockerMachine start ...
	I0407 14:52:58.241797   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 14:53:00.625163   13792 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:53:00.625163   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:53:00.625978   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:53:03.512008   13792 main.go:141] libmachine: [stdout =====>] : 172.17.94.208
	
	I0407 14:53:03.512008   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:53:03.517712   13792 main.go:141] libmachine: Using SSH client type: native
	I0407 14:53:03.534773   13792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.94.208 22 <nil> <nil>}
	I0407 14:53:03.534773   13792 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 14:53:03.683462   13792 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0407 14:53:03.683532   13792 buildroot.go:166] provisioning hostname "kubernetes-upgrade-003200"
	I0407 14:53:03.683662   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 14:53:06.080281   13792 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:53:06.080281   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:53:06.080365   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:53:09.017709   13792 main.go:141] libmachine: [stdout =====>] : 172.17.94.208
	
	I0407 14:53:09.017829   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:53:09.024247   13792 main.go:141] libmachine: Using SSH client type: native
	I0407 14:53:09.025670   13792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.94.208 22 <nil> <nil>}
	I0407 14:53:09.025768   13792 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-003200 && echo "kubernetes-upgrade-003200" | sudo tee /etc/hostname
	I0407 14:53:09.192232   13792 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-003200
	
	I0407 14:53:09.192232   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 14:53:11.578402   13792 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:53:11.579456   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:53:11.579531   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:53:14.462567   13792 main.go:141] libmachine: [stdout =====>] : 172.17.94.208
	
	I0407 14:53:14.462633   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:53:14.468946   13792 main.go:141] libmachine: Using SSH client type: native
	I0407 14:53:14.469678   13792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.94.208 22 <nil> <nil>}
	I0407 14:53:14.469678   13792 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-003200' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-003200/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-003200' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 14:53:14.625548   13792 main.go:141] libmachine: SSH cmd err, output: <nil>: 
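Note: the /etc/hosts script above is idempotent. It only touches the file when no line already ends in the new hostname, rewriting an existing 127.0.1.1 entry or appending one. The same logic as a standalone sketch operating on a local file (illustrative; the real edit runs over SSH inside the guest):

	package main
	
	import (
		"fmt"
		"os"
		"regexp"
		"strings"
	)
	
	// ensureHostsEntry reproduces the shell logic from the log: if no line maps the
	// hostname yet, rewrite an existing "127.0.1.1 ..." line or append a new one.
	func ensureHostsEntry(hosts, hostname string) string {
		if regexp.MustCompile(`(?m)^.*\s` + regexp.QuoteMeta(hostname) + `$`).MatchString(hosts) {
			return hosts // already present, nothing to do
		}
		loopback := regexp.MustCompile(`(?m)^127\.0\.1\.1\s.*$`)
		if loopback.MatchString(hosts) {
			return loopback.ReplaceAllString(hosts, "127.0.1.1 "+hostname)
		}
		return strings.TrimRight(hosts, "\n") + "\n127.0.1.1 " + hostname + "\n"
	}
	
	func main() {
		data, err := os.ReadFile("/etc/hosts")
		if err != nil {
			panic(err)
		}
		fmt.Print(ensureHostsEntry(string(data), "kubernetes-upgrade-003200"))
	}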
	I0407 14:53:14.625548   13792 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0407 14:53:14.625548   13792 buildroot.go:174] setting up certificates
	I0407 14:53:14.625548   13792 provision.go:84] configureAuth start
	I0407 14:53:14.625548   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 14:53:17.006154   13792 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:53:17.006207   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:53:17.006332   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:53:19.850384   13792 main.go:141] libmachine: [stdout =====>] : 172.17.94.208
	
	I0407 14:53:19.850384   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:53:19.851296   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 14:53:22.183930   13792 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:53:22.183930   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:53:22.184386   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:53:25.033324   13792 main.go:141] libmachine: [stdout =====>] : 172.17.94.208
	
	I0407 14:53:25.033324   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:53:25.033324   13792 provision.go:143] copyHostCerts
	I0407 14:53:25.035707   13792 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0407 14:53:25.035707   13792 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0407 14:53:25.036570   13792 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0407 14:53:25.037231   13792 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0407 14:53:25.037231   13792 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0407 14:53:25.038661   13792 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0407 14:53:25.039285   13792 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0407 14:53:25.039285   13792 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0407 14:53:25.039285   13792 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0407 14:53:25.039285   13792 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.kubernetes-upgrade-003200 san=[127.0.0.1 172.17.94.208 kubernetes-upgrade-003200 localhost minikube]
	I0407 14:53:25.355543   13792 provision.go:177] copyRemoteCerts
	I0407 14:53:25.365587   13792 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 14:53:25.366543   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 14:53:27.781600   13792 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:53:27.781600   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:53:27.781735   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:53:30.672857   13792 main.go:141] libmachine: [stdout =====>] : 172.17.94.208
	
	I0407 14:53:30.672857   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:53:30.673878   13792 sshutil.go:53] new ssh client: &{IP:172.17.94.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\kubernetes-upgrade-003200\id_rsa Username:docker}
	I0407 14:53:30.791560   13792 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.423931s)
	I0407 14:53:30.791560   13792 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 14:53:30.849205   13792 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1241 bytes)
	I0407 14:53:30.900821   13792 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0407 14:53:30.953194   13792 provision.go:87] duration metric: took 16.3275135s to configureAuth
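Note: configureAuth generated a Docker server certificate whose SANs are the loopback address, the VM IP, the machine name, localhost and minikube, signed it with the local CA, and copied ca.pem, server.pem and server-key.pem into /etc/docker so dockerd can be started with --tlsverify. A compact sketch of issuing such a certificate with crypto/x509 (a throwaway CA stands in for ca.pem/ca-key.pem; this is not minikube's implementation):

	package main
	
	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"math/big"
		"net"
		"os"
		"time"
	)
	
	// issueServerCert signs a server certificate carrying the SANs seen in the log.
	func issueServerCert(ca *x509.Certificate, caKey *rsa.PrivateKey) (certPEM, keyPEM []byte, err error) {
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			return nil, nil, err
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.kubernetes-upgrade-003200"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(26280 * time.Hour), // CertExpiration from the config above
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"kubernetes-upgrade-003200", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("172.17.94.208")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, ca, &key.PublicKey, caKey)
		if err != nil {
			return nil, nil, err
		}
		certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
		keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
		return certPEM, keyPEM, nil
	}
	
	func main() {
		// For a self-contained demo, create a throwaway CA instead of loading ca.pem/ca-key.pem.
		caKey, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		caTmpl := &x509.Certificate{
			SerialNumber:          big.NewInt(1),
			Subject:               pkix.Name{CommonName: "minikubeCA"},
			NotBefore:             time.Now(),
			NotAfter:              time.Now().Add(26280 * time.Hour),
			IsCA:                  true,
			KeyUsage:              x509.KeyUsageCertSign,
			BasicConstraintsValid: true,
		}
		caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
		if err != nil {
			panic(err)
		}
		ca, err := x509.ParseCertificate(caDER)
		if err != nil {
			panic(err)
		}
		certPEM, keyPEM, err := issueServerCert(ca, caKey)
		if err != nil {
			panic(err)
		}
		if err := os.WriteFile("server.pem", certPEM, 0o644); err != nil {
			panic(err)
		}
		if err := os.WriteFile("server-key.pem", keyPEM, 0o600); err != nil {
			panic(err)
		}
	}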
	I0407 14:53:30.953194   13792 buildroot.go:189] setting minikube options for container-runtime
	I0407 14:53:30.953780   13792 config.go:182] Loaded profile config "kubernetes-upgrade-003200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.20.0
	I0407 14:53:30.953780   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 14:53:33.404382   13792 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:53:33.404465   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:53:33.404465   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:53:36.223571   13792 main.go:141] libmachine: [stdout =====>] : 172.17.94.208
	
	I0407 14:53:36.224660   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:53:36.230652   13792 main.go:141] libmachine: Using SSH client type: native
	I0407 14:53:36.231418   13792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.94.208 22 <nil> <nil>}
	I0407 14:53:36.231418   13792 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0407 14:53:36.383507   13792 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0407 14:53:36.383677   13792 buildroot.go:70] root file system type: tmpfs
	I0407 14:53:36.383755   13792 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0407 14:53:36.383755   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 14:53:38.856490   13792 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:53:38.856651   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:53:38.856651   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:53:41.797082   13792 main.go:141] libmachine: [stdout =====>] : 172.17.94.208
	
	I0407 14:53:41.797082   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:53:41.803688   13792 main.go:141] libmachine: Using SSH client type: native
	I0407 14:53:41.804419   13792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.94.208 22 <nil> <nil>}
	I0407 14:53:41.804419   13792 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0407 14:53:41.959853   13792 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0407 14:53:41.959853   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 14:53:44.325411   13792 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:53:44.325411   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:53:44.325824   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:53:47.262126   13792 main.go:141] libmachine: [stdout =====>] : 172.17.94.208
	
	I0407 14:53:47.262126   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:53:47.268286   13792 main.go:141] libmachine: Using SSH client type: native
	I0407 14:53:47.269172   13792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.94.208 22 <nil> <nil>}
	I0407 14:53:47.269172   13792 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0407 14:53:49.599395   13792 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0407 14:53:49.599477   13792 machine.go:96] duration metric: took 51.35746s to provisionDockerMachine
	I0407 14:53:49.599547   13792 client.go:171] duration metric: took 2m7.2110947s to LocalClient.Create
	I0407 14:53:49.599547   13792 start.go:167] duration metric: took 2m7.2111646s to libmachine.API.Create "kubernetes-upgrade-003200"
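Note: the docker.service install a few lines up uses a simple idempotent pattern: render the unit to docker.service.new, then only move it into place and daemon-reload/enable/restart when it differs from the current unit. Here diff failed because no unit existed yet, so the new file was installed and docker.service enabled. The notable part of the unit is the ExecStart pair (clear the inherited command, then start dockerd with the TLS material copied earlier and --insecure-registry for the service CIDR). A sketch that renders just that portion with text/template (values are the ones from this run; illustrative only):

	package main
	
	import (
		"os"
		"text/template"
	)
	
	// dockerUnit holds the values interpolated into the ExecStart line seen in the log.
	type dockerUnit struct {
		Provider         string
		InsecureRegistry string
	}
	
	const unitTmpl = `[Service]
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider={{.Provider}} --insecure-registry {{.InsecureRegistry}}
	`
	
	func main() {
		t := template.Must(template.New("docker.service").Parse(unitTmpl))
		// Values from this run: hyperv driver, ServiceCIDR 10.96.0.0/12.
		if err := t.Execute(os.Stdout, dockerUnit{Provider: "hyperv", InsecureRegistry: "10.96.0.0/12"}); err != nil {
			panic(err)
		}
	}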
	I0407 14:53:49.599547   13792 start.go:293] postStartSetup for "kubernetes-upgrade-003200" (driver="hyperv")
	I0407 14:53:49.599547   13792 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 14:53:49.612325   13792 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 14:53:49.612325   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 14:53:51.971975   13792 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:53:51.971975   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:53:51.971975   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:53:54.785289   13792 main.go:141] libmachine: [stdout =====>] : 172.17.94.208
	
	I0407 14:53:54.785289   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:53:54.786519   13792 sshutil.go:53] new ssh client: &{IP:172.17.94.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\kubernetes-upgrade-003200\id_rsa Username:docker}
	I0407 14:53:54.908619   13792 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.2961076s)
	I0407 14:53:54.922531   13792 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 14:53:54.935472   13792 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 14:53:54.935567   13792 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0407 14:53:54.936113   13792 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0407 14:53:54.937594   13792 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> 77282.pem in /etc/ssl/certs
	I0407 14:53:54.952313   13792 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 14:53:54.974955   13792 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem --> /etc/ssl/certs/77282.pem (1708 bytes)
	I0407 14:53:55.034033   13792 start.go:296] duration metric: took 5.4344413s for postStartSetup
	I0407 14:53:55.037650   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 14:53:57.380859   13792 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:53:57.380859   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:53:57.381121   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:54:00.042732   13792 main.go:141] libmachine: [stdout =====>] : 172.17.94.208
	
	I0407 14:54:00.042732   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:54:00.043543   13792 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubernetes-upgrade-003200\config.json ...
	I0407 14:54:00.046815   13792 start.go:128] duration metric: took 2m17.6634769s to createHost
	I0407 14:54:00.046815   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 14:54:02.317127   13792 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:54:02.317571   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:54:02.317571   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:54:05.147525   13792 main.go:141] libmachine: [stdout =====>] : 172.17.94.208
	
	I0407 14:54:05.147688   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:54:05.153544   13792 main.go:141] libmachine: Using SSH client type: native
	I0407 14:54:05.154243   13792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.94.208 22 <nil> <nil>}
	I0407 14:54:05.154243   13792 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 14:54:05.298378   13792 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744037645.318134110
	
	I0407 14:54:05.298378   13792 fix.go:216] guest clock: 1744037645.318134110
	I0407 14:54:05.298455   13792 fix.go:229] Guest: 2025-04-07 14:54:05.31813411 +0000 UTC Remote: 2025-04-07 14:54:00.0468155 +0000 UTC m=+292.798316601 (delta=5.27131861s)
	I0407 14:54:05.298570   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 14:54:07.729305   13792 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:54:07.729818   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:54:07.729950   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:54:10.513747   13792 main.go:141] libmachine: [stdout =====>] : 172.17.94.208
	
	I0407 14:54:10.513747   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:54:10.522007   13792 main.go:141] libmachine: Using SSH client type: native
	I0407 14:54:10.522552   13792 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.94.208 22 <nil> <nil>}
	I0407 14:54:10.522552   13792 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744037645
	I0407 14:54:10.682960   13792 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  7 14:54:05 UTC 2025
	
	I0407 14:54:10.682960   13792 fix.go:236] clock set: Mon Apr  7 14:54:05 UTC 2025
	 (err=<nil>)
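Note: createHost finishes by comparing the guest clock, read over SSH with date +%s.%N, against the host clock. The guest was about 5.27 s ahead of the host reference time, so it was reset with sudo date -s as shown. A sketch of the delta computation on the host side (a hypothetical helper, not the fix.go implementation):

	package main
	
	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)
	
	// clockDelta parses the "date +%s.%N" output from the guest and returns how far
	// the guest clock is from the given host reference time.
	func clockDelta(guestOutput string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestOutput), 64)
		if err != nil {
			return 0, err
		}
		sec, frac := math.Modf(secs)
		guest := time.Unix(int64(sec), int64(frac*1e9))
		return guest.Sub(host), nil
	}
	
	func main() {
		// Values from this run: the guest reported 1744037645.318134110 while the
		// host reference time was 2025-04-07 14:54:00.0468155 UTC.
		host := time.Date(2025, 4, 7, 14, 54, 0, 46815500, time.UTC)
		d, err := clockDelta("1744037645.318134110", host)
		if err != nil {
			panic(err)
		}
		fmt.Printf("guest is ahead of host by %v\n", d) // ≈ 5.27s, matching the logged delta
	}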
	I0407 14:54:10.682960   13792 start.go:83] releasing machines lock for "kubernetes-upgrade-003200", held for 2m28.3002591s
	I0407 14:54:10.683335   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 14:54:13.076864   13792 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:54:13.077253   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:54:13.077253   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:54:15.982917   13792 main.go:141] libmachine: [stdout =====>] : 172.17.94.208
	
	I0407 14:54:15.982917   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:54:15.988304   13792 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0407 14:54:15.988304   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 14:54:16.002823   13792 ssh_runner.go:195] Run: cat /version.json
	I0407 14:54:16.002823   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 14:54:18.567619   13792 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:54:18.567738   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:54:18.567973   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:54:18.570138   13792 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:54:18.570138   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:54:18.570138   13792 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:54:21.501772   13792 main.go:141] libmachine: [stdout =====>] : 172.17.94.208
	
	I0407 14:54:21.502355   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:54:21.502355   13792 sshutil.go:53] new ssh client: &{IP:172.17.94.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\kubernetes-upgrade-003200\id_rsa Username:docker}
	I0407 14:54:21.528750   13792 main.go:141] libmachine: [stdout =====>] : 172.17.94.208
	
	I0407 14:54:21.528750   13792 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:54:21.529865   13792 sshutil.go:53] new ssh client: &{IP:172.17.94.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\kubernetes-upgrade-003200\id_rsa Username:docker}
	I0407 14:54:21.602059   13792 ssh_runner.go:235] Completed: cat /version.json: (5.5991897s)
	I0407 14:54:21.618938   13792 ssh_runner.go:195] Run: systemctl --version
	I0407 14:54:21.624835   13792 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.6364844s)
	W0407 14:54:21.624835   13792 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
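Note: the registry probe above fails for an incidental reason: the command executed inside the Linux guest is literally curl.exe -sS -m 2 https://registry.k8s.io/, and the Buildroot guest has no curl.exe, so bash exits with status 127 and the proxy warning below is printed; this particular check cannot succeed here regardless of actual connectivity, and minikube continues after warning. A direct probe with the same 2-second timeout, offered only as an alternative way to check reachability (not what minikube runs):

	package main
	
	import (
		"fmt"
		"net/http"
		"time"
	)
	
	func main() {
		// Same target and timeout as the log's probe, but without shelling out to curl.
		client := &http.Client{Timeout: 2 * time.Second}
		resp, err := client.Get("https://registry.k8s.io/")
		if err != nil {
			fmt.Println("registry.k8s.io unreachable:", err)
			return
		}
		defer resp.Body.Close()
		fmt.Println("registry.k8s.io reachable, status:", resp.Status)
	}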
	I0407 14:54:21.643808   13792 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 14:54:21.654202   13792 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 14:54:21.666900   13792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0407 14:54:21.701514   13792 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0407 14:54:21.735451   13792 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 14:54:21.735510   13792 start.go:495] detecting cgroup driver to use...
	I0407 14:54:21.735898   13792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	W0407 14:54:21.781609   13792 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0407 14:54:21.781609   13792 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	* To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
	I0407 14:54:21.788614   13792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.2"|' /etc/containerd/config.toml"
	I0407 14:54:21.822512   13792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0407 14:54:21.844678   13792 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0407 14:54:21.856783   13792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0407 14:54:21.892367   13792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 14:54:21.929580   13792 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0407 14:54:21.966777   13792 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 14:54:22.006258   13792 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 14:54:22.042746   13792 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0407 14:54:22.081407   13792 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 14:54:22.102596   13792 crio.go:166] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
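Note: the sysctl failure above is expected at this point: /proc/sys/net/bridge/bridge-nf-call-iptables only exists once the br_netfilter module is loaded, which is why the very next steps are modprobe br_netfilter and enabling ip_forward. A small check along the same lines (a sketch, not minikube's code):

	package main
	
	import (
		"fmt"
		"os"
	)
	
	func main() {
		// The sysctl file the log probes only appears after br_netfilter is loaded.
		const sysctl = "/proc/sys/net/bridge/bridge-nf-call-iptables"
		if _, err := os.Stat(sysctl); os.IsNotExist(err) {
			fmt.Println("br_netfilter not loaded yet; run: sudo modprobe br_netfilter")
			return
		}
		val, err := os.ReadFile(sysctl)
		if err != nil {
			panic(err)
		}
		fmt.Printf("bridge-nf-call-iptables = %s", val)
	}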
	I0407 14:54:22.115031   13792 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0407 14:54:22.157889   13792 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 14:54:22.188295   13792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:54:22.430997   13792 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0407 14:54:22.471503   13792 start.go:495] detecting cgroup driver to use...
	I0407 14:54:22.483268   13792 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0407 14:54:22.521273   13792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 14:54:22.573513   13792 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 14:54:22.625344   13792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 14:54:22.666335   13792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 14:54:22.707362   13792 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0407 14:54:22.783992   13792 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 14:54:22.812724   13792 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/dockershim.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 14:54:22.863936   13792 ssh_runner.go:195] Run: which cri-dockerd
	I0407 14:54:22.883249   13792 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0407 14:54:22.904349   13792 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0407 14:54:22.956578   13792 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0407 14:54:23.213610   13792 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0407 14:54:23.425837   13792 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0407 14:54:23.426190   13792 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0407 14:54:23.474159   13792 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 14:54:23.727912   13792 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0407 14:55:24.854974   13792 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.1265613s)
	I0407 14:55:24.867437   13792 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0407 14:55:24.908213   13792 out.go:201] 
	W0407 14:55:24.911123   13792 out.go:270] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 07 14:53:47 kubernetes-upgrade-003200 systemd[1]: Starting Docker Application Container Engine...
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:53:47.921839478Z" level=info msg="Starting up"
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:53:47.922952029Z" level=info msg="containerd not running, starting managed containerd"
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:53:47.925235834Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=665
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.958677272Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.989077670Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.989276179Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.989374083Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.989394084Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.989862706Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.990002812Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.990361229Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.990481134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.990506335Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.990519936Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.990645542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.991070061Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.994335111Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.994441516Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.994593923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.994747530Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.995019543Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.995724075Z" level=info msg="metadata content store policy set" policy=shared
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.026047797Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.026365810Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.026458314Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.026569219Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.026595220Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.026851231Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.027483759Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.027815273Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.027874775Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.027893776Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.027909577Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.027947679Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.027965679Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.027984380Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028016982Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028037582Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028053283Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028066484Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028088685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028112586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028128986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028144687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028285493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028322195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028338495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028358196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028386097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028405198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028419999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028435700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028452000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028470601Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028505603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028523603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028656009Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028791615Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028820716Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028836117Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028851018Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028881619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028897520Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028911020Z" level=info msg="NRI interface is disabled by configuration."
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.029187932Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.029336838Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.029405241Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.029453443Z" level=info msg="containerd successfully booted in 0.072651s"
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:53:48.999794871Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 07 14:53:49 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:53:49.037183985Z" level=info msg="Loading containers: start."
	Apr 07 14:53:49 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:53:49.218656318Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 07 14:53:49 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:53:49.476826713Z" level=info msg="Loading containers: done."
	Apr 07 14:53:49 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:53:49.501519603Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Apr 07 14:53:49 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:53:49.501758613Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Apr 07 14:53:49 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:53:49.502009523Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 07 14:53:49 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:53:49.502271833Z" level=info msg="Daemon has completed initialization"
	Apr 07 14:53:49 kubernetes-upgrade-003200 systemd[1]: Started Docker Application Container Engine.
	Apr 07 14:53:49 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:53:49.619446433Z" level=info msg="API listen on [::]:2376"
	Apr 07 14:53:49 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:53:49.619550637Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 07 14:54:23 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:54:23.776565428Z" level=info msg="Processing signal 'terminated'"
	Apr 07 14:54:23 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:54:23.778994627Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 07 14:54:23 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:54:23.779270427Z" level=info msg="Daemon shutdown complete"
	Apr 07 14:54:23 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:54:23.779285427Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 07 14:54:23 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:54:23.779355427Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 07 14:54:23 kubernetes-upgrade-003200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 07 14:54:24 kubernetes-upgrade-003200 systemd[1]: docker.service: Deactivated successfully.
	Apr 07 14:54:24 kubernetes-upgrade-003200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 07 14:54:24 kubernetes-upgrade-003200 systemd[1]: Starting Docker Application Container Engine...
	Apr 07 14:54:24 kubernetes-upgrade-003200 dockerd[1162]: time="2025-04-07T14:54:24.846242501Z" level=info msg="Starting up"
	Apr 07 14:55:24 kubernetes-upgrade-003200 dockerd[1162]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 07 14:55:24 kubernetes-upgrade-003200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 07 14:55:24 kubernetes-upgrade-003200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 07 14:55:24 kubernetes-upgrade-003200 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 07 14:53:47 kubernetes-upgrade-003200 systemd[1]: Starting Docker Application Container Engine...
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:53:47.921839478Z" level=info msg="Starting up"
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:53:47.922952029Z" level=info msg="containerd not running, starting managed containerd"
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:53:47.925235834Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=665
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.958677272Z" level=info msg="starting containerd" revision=57f17b0a6295a39009d861b89e3b3b87b005ca27 version=v1.7.23
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.989077670Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.989276179Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.989374083Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.989394084Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.989862706Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.990002812Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.990361229Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.990481134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.990506335Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.990519936Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.990645542Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.991070061Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.994335111Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.994441516Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.994593923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.994747530Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.995019543Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 07 14:53:47 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:47.995724075Z" level=info msg="metadata content store policy set" policy=shared
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.026047797Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.026365810Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.026458314Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.026569219Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.026595220Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.026851231Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.027483759Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.027815273Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.027874775Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.027893776Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.027909577Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.027947679Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.027965679Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.027984380Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028016982Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028037582Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028053283Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028066484Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028088685Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028112586Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028128986Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028144687Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028285493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028322195Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028338495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028358196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028386097Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028405198Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028419999Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028435700Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028452000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028470601Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028505603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028523603Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028656009Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028791615Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028820716Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028836117Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028851018Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028881619Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028897520Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.028911020Z" level=info msg="NRI interface is disabled by configuration."
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.029187932Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.029336838Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.029405241Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[665]: time="2025-04-07T14:53:48.029453443Z" level=info msg="containerd successfully booted in 0.072651s"
	Apr 07 14:53:48 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:53:48.999794871Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 07 14:53:49 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:53:49.037183985Z" level=info msg="Loading containers: start."
	Apr 07 14:53:49 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:53:49.218656318Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Apr 07 14:53:49 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:53:49.476826713Z" level=info msg="Loading containers: done."
	Apr 07 14:53:49 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:53:49.501519603Z" level=warning msg="WARNING: bridge-nf-call-iptables is disabled"
	Apr 07 14:53:49 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:53:49.501758613Z" level=warning msg="WARNING: bridge-nf-call-ip6tables is disabled"
	Apr 07 14:53:49 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:53:49.502009523Z" level=info msg="Docker daemon" commit=92a8393 containerd-snapshotter=false storage-driver=overlay2 version=27.4.0
	Apr 07 14:53:49 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:53:49.502271833Z" level=info msg="Daemon has completed initialization"
	Apr 07 14:53:49 kubernetes-upgrade-003200 systemd[1]: Started Docker Application Container Engine.
	Apr 07 14:53:49 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:53:49.619446433Z" level=info msg="API listen on [::]:2376"
	Apr 07 14:53:49 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:53:49.619550637Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 07 14:54:23 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:54:23.776565428Z" level=info msg="Processing signal 'terminated'"
	Apr 07 14:54:23 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:54:23.778994627Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 07 14:54:23 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:54:23.779270427Z" level=info msg="Daemon shutdown complete"
	Apr 07 14:54:23 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:54:23.779285427Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 07 14:54:23 kubernetes-upgrade-003200 dockerd[659]: time="2025-04-07T14:54:23.779355427Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 07 14:54:23 kubernetes-upgrade-003200 systemd[1]: Stopping Docker Application Container Engine...
	Apr 07 14:54:24 kubernetes-upgrade-003200 systemd[1]: docker.service: Deactivated successfully.
	Apr 07 14:54:24 kubernetes-upgrade-003200 systemd[1]: Stopped Docker Application Container Engine.
	Apr 07 14:54:24 kubernetes-upgrade-003200 systemd[1]: Starting Docker Application Container Engine...
	Apr 07 14:54:24 kubernetes-upgrade-003200 dockerd[1162]: time="2025-04-07T14:54:24.846242501Z" level=info msg="Starting up"
	Apr 07 14:55:24 kubernetes-upgrade-003200 dockerd[1162]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 07 14:55:24 kubernetes-upgrade-003200 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 07 14:55:24 kubernetes-upgrade-003200 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 07 14:55:24 kubernetes-upgrade-003200 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0407 14:55:24.911123   13792 out.go:270] * 
	* 
	W0407 14:55:24.913123   13792 out.go:293] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0407 14:55:24.916119   13792 out.go:201] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-003200 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperv: exit status 90
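For manual triage of the RUNTIME_ENABLE failure above, a minimal sketch outside the test harness, using the same binary and profile name as this run and only the checks the failure message itself points to (the systemctl/journalctl invocations mirror the ones already captured in the journal; collecting logs.txt matches the boxed advice above):

    out/minikube-windows-amd64.exe ssh -p kubernetes-upgrade-003200 sudo systemctl status docker --all --full --no-pager
    out/minikube-windows-amd64.exe ssh -p kubernetes-upgrade-003200 sudo journalctl --no-pager -u docker
    out/minikube-windows-amd64.exe -p kubernetes-upgrade-003200 logs --file=logs.txt

The line to look for is the second dockerd start failing with "failed to dial /run/containerd/containerd.sock ... context deadline exceeded" after the graceful restart at 14:54:24.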
version_upgrade_test.go:227: (dbg) Run:  out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-003200
version_upgrade_test.go:227: (dbg) Done: out/minikube-windows-amd64.exe stop -p kubernetes-upgrade-003200: (1m9.1283672s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-003200 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p kubernetes-upgrade-003200 status --format={{.Host}}: exit status 7 (2.687046s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-003200 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=hyperv
E0407 14:56:55.785574    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:243: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-003200 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=hyperv: (6m38.6340351s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-003200 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-003200 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-003200 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv: exit status 106 (295.8362ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-003200] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.32.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-003200
	    minikube start -p kubernetes-upgrade-003200 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0032002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.32.2, by running:
	    
	    minikube start -p kubernetes-upgrade-003200 --kubernetes-version=v1.32.2
	    

                                                
                                                
** /stderr **
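The K8S_DOWNGRADE_UNSUPPORTED exit is the outcome this step expects; for reference, the first recovery path from the suggestion above spelled out against this profile (a sketch that adds --memory=2200 and --driver=hyperv to match the rest of this run; the bare commands are as minikube printed them):

    out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-003200
    out/minikube-windows-amd64.exe start -p kubernetes-upgrade-003200 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperv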
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-windows-amd64.exe start -p kubernetes-upgrade-003200 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=hyperv
E0407 15:03:54.502207    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:275: (dbg) Done: out/minikube-windows-amd64.exe start -p kubernetes-upgrade-003200 --memory=2200 --kubernetes-version=v1.32.2 --alsologtostderr -v=1 --driver=hyperv: (6m12.4338236s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2025-04-07 15:09:28.5868894 +0000 UTC m=+10231.831276601
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-003200 -n kubernetes-upgrade-003200
helpers_test.go:239: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.Host}} -p kubernetes-upgrade-003200 -n kubernetes-upgrade-003200: (12.8224931s)
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p kubernetes-upgrade-003200 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p kubernetes-upgrade-003200 logs -n 25: (9.2361761s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p cilium-004500 sudo cat              | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |                   |         |                     |                     |
	| ssh     | -p cilium-004500 sudo cat              | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |                   |         |                     |                     |
	| ssh     | -p cilium-004500 sudo                  | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC |                     |
	|         | containerd config dump                 |                           |                   |         |                     |                     |
	| ssh     | -p cilium-004500 sudo                  | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC |                     |
	|         | systemctl status crio --all            |                           |                   |         |                     |                     |
	|         | --full --no-pager                      |                           |                   |         |                     |                     |
	| ssh     | -p cilium-004500 sudo                  | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |                   |         |                     |                     |
	| ssh     | -p cilium-004500 sudo find             | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |                   |         |                     |                     |
	| ssh     | -p cilium-004500 sudo crio             | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC |                     |
	|         | config                                 |                           |                   |         |                     |                     |
	| delete  | -p cilium-004500                       | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC | 07 Apr 25 14:51 UTC |
	| start   | -p pause-061700 --memory=2048          | pause-061700              | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC | 07 Apr 25 15:00 UTC |
	|         | --install-addons=false                 |                           |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv             |                           |                   |         |                     |                     |
	| start   | -p running-upgrade-817400              | running-upgrade-817400    | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:52 UTC | 07 Apr 25 15:01 UTC |
	|         | --memory=2200                          |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| stop    | -p kubernetes-upgrade-003200           | kubernetes-upgrade-003200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:55 UTC | 07 Apr 25 14:56 UTC |
	| start   | -p kubernetes-upgrade-003200           | kubernetes-upgrade-003200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:56 UTC | 07 Apr 25 15:03 UTC |
	|         | --memory=2200                          |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.32.2           |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| stop    | stopped-upgrade-523500 stop            | minikube                  | minikube3\jenkins | v1.26.0 | 07 Apr 25 14:57 GMT | 07 Apr 25 14:58 GMT |
	| start   | -p stopped-upgrade-523500              | stopped-upgrade-523500    | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:58 UTC | 07 Apr 25 15:04 UTC |
	|         | --memory=2200                          |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| start   | -p pause-061700                        | pause-061700              | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:00 UTC | 07 Apr 25 15:05 UTC |
	|         | --alsologtostderr -v=1                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| delete  | -p running-upgrade-817400              | running-upgrade-817400    | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:01 UTC | 07 Apr 25 15:02 UTC |
	| start   | -p cert-expiration-287100              | cert-expiration-287100    | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:02 UTC | 07 Apr 25 15:08 UTC |
	|         | --memory=2048                          |                           |                   |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-003200           | kubernetes-upgrade-003200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:03 UTC |                     |
	|         | --memory=2200                          |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-003200           | kubernetes-upgrade-003200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:03 UTC | 07 Apr 25 15:09 UTC |
	|         | --memory=2200                          |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.32.2           |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| delete  | -p stopped-upgrade-523500              | stopped-upgrade-523500    | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:05 UTC | 07 Apr 25 15:05 UTC |
	| start   | -p docker-flags-422800                 | docker-flags-422800       | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:05 UTC |                     |
	|         | --cache-images=false                   |                           |                   |         |                     |                     |
	|         | --memory=2048                          |                           |                   |         |                     |                     |
	|         | --install-addons=false                 |                           |                   |         |                     |                     |
	|         | --wait=false                           |                           |                   |         |                     |                     |
	|         | --docker-env=FOO=BAR                   |                           |                   |         |                     |                     |
	|         | --docker-env=BAZ=BAT                   |                           |                   |         |                     |                     |
	|         | --docker-opt=debug                     |                           |                   |         |                     |                     |
	|         | --docker-opt=icc=true                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| pause   | -p pause-061700                        | pause-061700              | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:05 UTC | 07 Apr 25 15:06 UTC |
	|         | --alsologtostderr -v=5                 |                           |                   |         |                     |                     |
	| unpause | -p pause-061700                        | pause-061700              | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:06 UTC |                     |
	|         | --alsologtostderr -v=5                 |                           |                   |         |                     |                     |
	| delete  | -p pause-061700                        | pause-061700              | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:07 UTC | 07 Apr 25 15:08 UTC |
	| start   | -p force-systemd-env-498800            | force-systemd-env-498800  | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:08 UTC |                     |
	|         | --memory=2048                          |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	|---------|----------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 15:08:19
	Running on machine: minikube3
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 15:08:19.681390   10064 out.go:345] Setting OutFile to fd 1640 ...
	I0407 15:08:19.766387   10064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 15:08:19.766387   10064 out.go:358] Setting ErrFile to fd 1884...
	I0407 15:08:19.766387   10064 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 15:08:19.786766   10064 out.go:352] Setting JSON to false
	I0407 15:08:19.789926   10064 start.go:129] hostinfo: {"hostname":"minikube3","uptime":10292,"bootTime":1744028207,"procs":185,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0407 15:08:19.789977   10064 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0407 15:08:19.796285   10064 out.go:177] * [force-systemd-env-498800] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0407 15:08:19.800672   10064 notify.go:220] Checking for updates...
	I0407 15:08:19.800884   10064 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 15:08:19.804175   10064 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0407 15:08:19.807223   10064 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 15:08:19.810375   10064 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 15:08:19.812916   10064 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=true
	I0407 15:08:17.504192    8516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:08:17.504192    8516 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:17.504351    8516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 15:08:20.184974    8516 main.go:141] libmachine: [stdout =====>] : 172.17.91.18
	
	I0407 15:08:20.184974    8516 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:20.189879    8516 main.go:141] libmachine: Using SSH client type: native
	I0407 15:08:20.190942    8516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.91.18 22 <nil> <nil>}
	I0407 15:08:20.190942    8516 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0407 15:08:20.360843    8516 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 15:08:20.360908    8516 machine.go:96] duration metric: took 47.7242335s to provisionDockerMachine
	I0407 15:08:20.360908    8516 start.go:293] postStartSetup for "kubernetes-upgrade-003200" (driver="hyperv")
	I0407 15:08:20.360978    8516 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 15:08:20.379031    8516 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 15:08:20.379031    8516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 15:08:19.816592   10064 config.go:182] Loaded profile config "cert-expiration-287100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 15:08:19.817263   10064 config.go:182] Loaded profile config "docker-flags-422800": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 15:08:19.817656   10064 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 15:08:19.817656   10064 config.go:182] Loaded profile config "kubernetes-upgrade-003200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 15:08:19.818671   10064 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 15:08:25.501437   10064 out.go:177] * Using the hyperv driver based on user configuration
	I0407 15:08:25.506115   10064 start.go:297] selected driver: hyperv
	I0407 15:08:25.506150   10064 start.go:901] validating driver "hyperv" against <nil>
	I0407 15:08:25.506223   10064 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 15:08:25.556254   10064 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 15:08:25.557411   10064 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0407 15:08:25.557515   10064 cni.go:84] Creating CNI manager for ""
	I0407 15:08:25.557515   10064 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0407 15:08:25.557579   10064 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0407 15:08:25.557740   10064 start.go:340] cluster config:
	{Name:force-systemd-env-498800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:force-systemd-env-498800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.
local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:
1m0s}
	I0407 15:08:25.558015   10064 iso.go:125] acquiring lock: {Name:mk99bbb6a54210c1995fdf151b41c83b57c3735b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 15:08:25.564015   10064 out.go:177] * Starting "force-systemd-env-498800" primary control-plane node in "force-systemd-env-498800" cluster
	I0407 15:08:22.656802    8516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:08:22.656879    8516 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:22.656994    8516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 15:08:25.430463    8516 main.go:141] libmachine: [stdout =====>] : 172.17.91.18
	
	I0407 15:08:25.430992    8516 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:25.431245    8516 sshutil.go:53] new ssh client: &{IP:172.17.91.18 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\kubernetes-upgrade-003200\id_rsa Username:docker}
	I0407 15:08:25.538678    8516 ssh_runner.go:235] Completed: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs: (5.1596039s)
	I0407 15:08:25.551289    8516 ssh_runner.go:195] Run: cat /etc/os-release
	I0407 15:08:25.559503    8516 info.go:137] Remote host: Buildroot 2023.02.9
	I0407 15:08:25.559577    8516 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\addons for local assets ...
	I0407 15:08:25.559903    8516 filesync.go:126] Scanning C:\Users\jenkins.minikube3\minikube-integration\.minikube\files for local assets ...
	I0407 15:08:25.560724    8516 filesync.go:149] local asset: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem -> 77282.pem in /etc/ssl/certs
	I0407 15:08:25.573067    8516 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0407 15:08:25.593400    8516 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem --> /etc/ssl/certs/77282.pem (1708 bytes)
	I0407 15:08:25.643157    8516 start.go:296] duration metric: took 5.2822052s for postStartSetup
	I0407 15:08:25.643235    8516 fix.go:56] duration metric: took 55.3918193s for fixHost
	I0407 15:08:25.643429    8516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 15:08:27.005888   13864 kubeadm.go:310] [init] Using Kubernetes version: v1.32.2
	I0407 15:08:27.005888   13864 kubeadm.go:310] [preflight] Running pre-flight checks
	I0407 15:08:27.005888   13864 kubeadm.go:310] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0407 15:08:27.005888   13864 kubeadm.go:310] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0407 15:08:27.005888   13864 kubeadm.go:310] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0407 15:08:27.006759   13864 kubeadm.go:310] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0407 15:08:27.009182   13864 out.go:235]   - Generating certificates and keys ...
	I0407 15:08:27.009513   13864 kubeadm.go:310] [certs] Using existing ca certificate authority
	I0407 15:08:27.009595   13864 kubeadm.go:310] [certs] Using existing apiserver certificate and key on disk
	I0407 15:08:27.009716   13864 kubeadm.go:310] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0407 15:08:27.009917   13864 kubeadm.go:310] [certs] Generating "front-proxy-ca" certificate and key
	I0407 15:08:27.010012   13864 kubeadm.go:310] [certs] Generating "front-proxy-client" certificate and key
	I0407 15:08:27.010092   13864 kubeadm.go:310] [certs] Generating "etcd/ca" certificate and key
	I0407 15:08:27.010203   13864 kubeadm.go:310] [certs] Generating "etcd/server" certificate and key
	I0407 15:08:27.010484   13864 kubeadm.go:310] [certs] etcd/server serving cert is signed for DNS names [cert-expiration-287100 localhost] and IPs [172.17.86.101 127.0.0.1 ::1]
	I0407 15:08:27.010653   13864 kubeadm.go:310] [certs] Generating "etcd/peer" certificate and key
	I0407 15:08:27.010909   13864 kubeadm.go:310] [certs] etcd/peer serving cert is signed for DNS names [cert-expiration-287100 localhost] and IPs [172.17.86.101 127.0.0.1 ::1]
	I0407 15:08:27.011064   13864 kubeadm.go:310] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0407 15:08:27.011154   13864 kubeadm.go:310] [certs] Generating "apiserver-etcd-client" certificate and key
	I0407 15:08:27.011348   13864 kubeadm.go:310] [certs] Generating "sa" key and public key
	I0407 15:08:27.011437   13864 kubeadm.go:310] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0407 15:08:27.011437   13864 kubeadm.go:310] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0407 15:08:27.011610   13864 kubeadm.go:310] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0407 15:08:27.011703   13864 kubeadm.go:310] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0407 15:08:27.011915   13864 kubeadm.go:310] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0407 15:08:27.011915   13864 kubeadm.go:310] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0407 15:08:27.011915   13864 kubeadm.go:310] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0407 15:08:27.011915   13864 kubeadm.go:310] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0407 15:08:27.016002   13864 out.go:235]   - Booting up control plane ...
	I0407 15:08:27.016002   13864 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0407 15:08:27.016002   13864 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0407 15:08:27.016002   13864 kubeadm.go:310] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0407 15:08:27.016002   13864 kubeadm.go:310] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0407 15:08:27.016002   13864 kubeadm.go:310] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0407 15:08:27.016002   13864 kubeadm.go:310] [kubelet-start] Starting the kubelet
	I0407 15:08:27.017239   13864 kubeadm.go:310] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0407 15:08:27.017277   13864 kubeadm.go:310] [kubelet-check] Waiting for a healthy kubelet at http://127.0.0.1:10248/healthz. This can take up to 4m0s
	I0407 15:08:27.017277   13864 kubeadm.go:310] [kubelet-check] The kubelet is healthy after 2.089035015s
	I0407 15:08:27.017277   13864 kubeadm.go:310] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0407 15:08:27.017277   13864 kubeadm.go:310] [api-check] The API server is healthy after 11.502316255s
	I0407 15:08:27.017842   13864 kubeadm.go:310] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0407 15:08:27.017842   13864 kubeadm.go:310] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0407 15:08:27.017842   13864 kubeadm.go:310] [upload-certs] Skipping phase. Please see --upload-certs
	I0407 15:08:27.017842   13864 kubeadm.go:310] [mark-control-plane] Marking the node cert-expiration-287100 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0407 15:08:27.017842   13864 kubeadm.go:310] [bootstrap-token] Using token: o0v5l0.d9tylxdtrgk0kf34
	I0407 15:08:27.021847   13864 out.go:235]   - Configuring RBAC rules ...
	I0407 15:08:27.021847   13864 kubeadm.go:310] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0407 15:08:27.022196   13864 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0407 15:08:27.022196   13864 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0407 15:08:27.022196   13864 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0407 15:08:27.023198   13864 kubeadm.go:310] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0407 15:08:27.023198   13864 kubeadm.go:310] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0407 15:08:27.023198   13864 kubeadm.go:310] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0407 15:08:27.023198   13864 kubeadm.go:310] [addons] Applied essential addon: CoreDNS
	I0407 15:08:27.023198   13864 kubeadm.go:310] [addons] Applied essential addon: kube-proxy
	I0407 15:08:27.023198   13864 kubeadm.go:310] 
	I0407 15:08:27.023198   13864 kubeadm.go:310] Your Kubernetes control-plane has initialized successfully!
	I0407 15:08:27.023198   13864 kubeadm.go:310] 
	I0407 15:08:27.023198   13864 kubeadm.go:310] To start using your cluster, you need to run the following as a regular user:
	I0407 15:08:27.023198   13864 kubeadm.go:310] 
	I0407 15:08:27.023198   13864 kubeadm.go:310]   mkdir -p $HOME/.kube
	I0407 15:08:27.023198   13864 kubeadm.go:310]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0407 15:08:27.024197   13864 kubeadm.go:310]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0407 15:08:27.024197   13864 kubeadm.go:310] 
	I0407 15:08:27.024197   13864 kubeadm.go:310] Alternatively, if you are the root user, you can run:
	I0407 15:08:27.024197   13864 kubeadm.go:310] 
	I0407 15:08:27.024197   13864 kubeadm.go:310]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0407 15:08:27.024197   13864 kubeadm.go:310] 
	I0407 15:08:27.024197   13864 kubeadm.go:310] You should now deploy a pod network to the cluster.
	I0407 15:08:27.024197   13864 kubeadm.go:310] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0407 15:08:27.024197   13864 kubeadm.go:310]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0407 15:08:27.024197   13864 kubeadm.go:310] 
	I0407 15:08:27.024197   13864 kubeadm.go:310] You can now join any number of control-plane nodes by copying certificate authorities
	I0407 15:08:27.024197   13864 kubeadm.go:310] and service account keys on each node and then running the following as root:
	I0407 15:08:27.024197   13864 kubeadm.go:310] 
	I0407 15:08:27.025330   13864 kubeadm.go:310]   kubeadm join control-plane.minikube.internal:8443 --token o0v5l0.d9tylxdtrgk0kf34 \
	I0407 15:08:27.025330   13864 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e47514a965ab97a4e144f4fd6f094d10c7d6accee7eb763a802bd4fd6fa9e479 \
	I0407 15:08:27.025330   13864 kubeadm.go:310] 	--control-plane 
	I0407 15:08:27.025330   13864 kubeadm.go:310] 
	I0407 15:08:27.025330   13864 kubeadm.go:310] Then you can join any number of worker nodes by running the following on each as root:
	I0407 15:08:27.025330   13864 kubeadm.go:310] 
	I0407 15:08:27.025330   13864 kubeadm.go:310] kubeadm join control-plane.minikube.internal:8443 --token o0v5l0.d9tylxdtrgk0kf34 \
	I0407 15:08:27.025330   13864 kubeadm.go:310] 	--discovery-token-ca-cert-hash sha256:e47514a965ab97a4e144f4fd6f094d10c7d6accee7eb763a802bd4fd6fa9e479 
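The join command above embeds a --discovery-token-ca-cert-hash, which kubeadm defines as the SHA-256 of the cluster CA certificate's Subject Public Key Info. A minimal standard-library sketch that recomputes it from the certificate directory used in this run (/var/lib/minikube/certs) is shown below; run on that node, it should reproduce the sha256:e47514… value printed in the join command.

```go
// cacerthash.go - recompute kubeadm's discovery-token-ca-cert-hash from the cluster CA cert.
// Sketch only; the path below is the certs dir used in this log, adjust as needed.
package main

import (
	"crypto/sha256"
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"log"
	"os"
)

func main() {
	pemBytes, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
	if err != nil {
		log.Fatal(err)
	}
	block, _ := pem.Decode(pemBytes)
	if block == nil {
		log.Fatal("no PEM block found in ca.crt")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		log.Fatal(err)
	}
	// kubeadm hashes the DER-encoded SubjectPublicKeyInfo of the CA certificate.
	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
	fmt.Printf("sha256:%x\n", sum)
}
```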
	I0407 15:08:27.026207   13864 cni.go:84] Creating CNI manager for ""
	I0407 15:08:27.026207   13864 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0407 15:08:27.029194   13864 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0407 15:08:27.044294   13864 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0407 15:08:27.063781   13864 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
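The 496-byte /etc/cni/net.d/1-k8s.conflist copied above is generated in memory and not reproduced in this log. The sketch below emits a representative bridge conflist for the 10.244.0.0/16 pod CIDR used later in this run; the exact field set minikube writes may differ.

```go
// bridge_conflist.go - emit a representative bridge CNI conflist (illustrative only;
// the 1-k8s.conflist generated in memory above is not reproduced here).
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

func main() {
	conflist := map[string]any{
		"cniVersion": "0.3.1",
		"name":       "bridge",
		"plugins": []map[string]any{
			{
				"type":             "bridge",
				"bridge":           "bridge",
				"isDefaultGateway": true,
				"ipMasq":           true,
				"hairpinMode":      true,
				// 10.244.0.0/16 matches the pod CIDR used elsewhere in this log.
				"ipam": map[string]any{"type": "host-local", "subnet": "10.244.0.0/16"},
			},
			{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
		},
	}
	out, err := json.MarshalIndent(conflist, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out)) // write this to /etc/cni/net.d/1-k8s.conflist on the node
}
```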
	I0407 15:08:27.101028   13864 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0407 15:08:27.115132   13864 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0407 15:08:27.115132   13864 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes cert-expiration-287100 minikube.k8s.io/updated_at=2025_04_07T15_08_27_0700 minikube.k8s.io/version=v1.35.0 minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277 minikube.k8s.io/name=cert-expiration-287100 minikube.k8s.io/primary=true
	I0407 15:08:27.137791   13864 ops.go:34] apiserver oom_adj: -16
	I0407 15:08:27.665493   13864 kubeadm.go:1113] duration metric: took 564.3766ms to wait for elevateKubeSystemPrivileges
	I0407 15:08:27.665493   13864 kubeadm.go:394] duration metric: took 22.7947786s to StartCluster
	I0407 15:08:27.665587   13864 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 15:08:27.665716   13864 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 15:08:27.667578   13864 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 15:08:27.668962   13864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0407 15:08:27.668962   13864 start.go:235] Will wait 6m0s for node &{Name: IP:172.17.86.101 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 15:08:27.668962   13864 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0407 15:08:27.668962   13864 addons.go:69] Setting storage-provisioner=true in profile "cert-expiration-287100"
	I0407 15:08:27.669520   13864 addons.go:238] Setting addon storage-provisioner=true in "cert-expiration-287100"
	I0407 15:08:27.669587   13864 addons.go:69] Setting default-storageclass=true in profile "cert-expiration-287100"
	I0407 15:08:27.669645   13864 host.go:66] Checking if "cert-expiration-287100" exists ...
	I0407 15:08:27.669645   13864 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "cert-expiration-287100"
	I0407 15:08:27.670042   13864 config.go:182] Loaded profile config "cert-expiration-287100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 15:08:27.670666   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	I0407 15:08:27.671309   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	I0407 15:08:27.673368   13864 out.go:177] * Verifying Kubernetes components...
	I0407 15:08:25.566435   10064 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 15:08:25.566487   10064 preload.go:146] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0407 15:08:25.566487   10064 cache.go:56] Caching tarball of preloaded images
	I0407 15:08:25.566487   10064 preload.go:172] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0407 15:08:25.567120   10064 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0407 15:08:25.567184   10064 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\force-systemd-env-498800\config.json ...
	I0407 15:08:25.567184   10064 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\force-systemd-env-498800\config.json: {Name:mk7824f79c184f8ee59f393395d69646e48d5083 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 15:08:25.568578   10064 start.go:360] acquireMachinesLock for force-systemd-env-498800: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 15:08:27.689208   13864 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 15:08:28.022418   13864 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           172.17.80.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.32.2/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0407 15:08:28.162417   13864 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 15:08:28.517775   13864 start.go:971] {"host.minikube.internal": 172.17.80.1} host record injected into CoreDNS's ConfigMap
	I0407 15:08:28.521931   13864 api_server.go:52] waiting for apiserver process to appear ...
	I0407 15:08:28.533039   13864 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 15:08:28.571310   13864 api_server.go:72] duration metric: took 902.3398ms to wait for apiserver process to appear ...
	I0407 15:08:28.571392   13864 api_server.go:88] waiting for apiserver healthz status ...
	I0407 15:08:28.571392   13864 api_server.go:253] Checking apiserver healthz at https://172.17.86.101:8443/healthz ...
	I0407 15:08:28.580692   13864 api_server.go:279] https://172.17.86.101:8443/healthz returned 200:
	ok
	I0407 15:08:28.582903   13864 api_server.go:141] control plane version: v1.32.2
	I0407 15:08:28.582903   13864 api_server.go:131] duration metric: took 11.511ms to wait for apiserver health ...
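The readiness check above is a plain HTTPS GET against /healthz that succeeds once the apiserver answers 200 with the body "ok". A minimal poller in the same spirit is sketched below; it skips TLS verification for brevity, whereas a real client should trust the cluster CA instead.

```go
// healthz_poll.go - poll a Kubernetes apiserver /healthz endpoint until it returns 200 "ok".
// Sketch only: TLS verification is skipped for brevity; trust the cluster CA in real use.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"log"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	url := "https://172.17.86.101:8443/healthz" // address taken from this log
	deadline := time.Now().Add(4 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
				return
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	log.Fatal("apiserver did not become healthy before the deadline")
}
```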
	I0407 15:08:28.582903   13864 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 15:08:28.594404   13864 system_pods.go:59] 4 kube-system pods found
	I0407 15:08:28.594404   13864 system_pods.go:61] "etcd-cert-expiration-287100" [720be1fb-8894-47bc-a624-63de0ea5e42b] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0407 15:08:28.594404   13864 system_pods.go:61] "kube-apiserver-cert-expiration-287100" [f99dabb0-c10d-4f4d-bbdd-49c84590099d] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0407 15:08:28.594404   13864 system_pods.go:61] "kube-controller-manager-cert-expiration-287100" [a7f11c35-8475-4f0d-8ce4-aa4f03a7ba16] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0407 15:08:28.594404   13864 system_pods.go:61] "kube-scheduler-cert-expiration-287100" [9d6d824d-7eb0-4865-a22c-86e1e11d1354] Running
	I0407 15:08:28.594404   13864 system_pods.go:74] duration metric: took 11.5011ms to wait for pod list to return data ...
	I0407 15:08:28.594404   13864 kubeadm.go:582] duration metric: took 925.4339ms to wait for: map[apiserver:true system_pods:true]
	I0407 15:08:28.594404   13864 node_conditions.go:102] verifying NodePressure condition ...
	I0407 15:08:28.598998   13864 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 15:08:28.598998   13864 node_conditions.go:123] node cpu capacity is 2
	I0407 15:08:28.598998   13864 node_conditions.go:105] duration metric: took 4.5938ms to run NodePressure ...
	I0407 15:08:28.598998   13864 start.go:241] waiting for startup goroutines ...
	I0407 15:08:29.029590   13864 kapi.go:214] "coredns" deployment in "kube-system" namespace and "cert-expiration-287100" context rescaled to 1 replicas
	I0407 15:08:30.759838   13864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:08:30.760011   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:30.761972   13864 addons.go:238] Setting addon default-storageclass=true in "cert-expiration-287100"
	I0407 15:08:30.762499   13864 host.go:66] Checking if "cert-expiration-287100" exists ...
	I0407 15:08:30.763192   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	I0407 15:08:30.763192   13864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:08:30.763192   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:30.766881   13864 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 15:08:30.769573   13864 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 15:08:30.769573   13864 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0407 15:08:30.769573   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	I0407 15:08:28.603002    8516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:08:28.603935    8516 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:28.603990    8516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 15:08:33.251255   13864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:08:33.251255   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:33.251255   13864 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0407 15:08:33.252162   13864 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0407 15:08:33.252162   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	I0407 15:08:33.297571   13864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:08:33.297571   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:33.297571   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-287100 ).networkadapters[0]).ipaddresses[0]
	I0407 15:08:35.771012   13864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:08:35.771012   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:35.771192   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-287100 ).networkadapters[0]).ipaddresses[0]
	I0407 15:08:36.174868   13864 main.go:141] libmachine: [stdout =====>] : 172.17.86.101
	
	I0407 15:08:36.174868   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:36.175850   13864 sshutil.go:53] new ssh client: &{IP:172.17.86.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\cert-expiration-287100\id_rsa Username:docker}
	I0407 15:08:31.413378    8516 main.go:141] libmachine: [stdout =====>] : 172.17.91.18
	
	I0407 15:08:31.413378    8516 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:31.423340    8516 main.go:141] libmachine: Using SSH client type: native
	I0407 15:08:31.423340    8516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.91.18 22 <nil> <nil>}
	I0407 15:08:31.423340    8516 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0407 15:08:31.598532    8516 main.go:141] libmachine: SSH cmd err, output: <nil>: 1744038511.606392922
	
	I0407 15:08:31.598532    8516 fix.go:216] guest clock: 1744038511.606392922
	I0407 15:08:31.598532    8516 fix.go:229] Guest: 2025-04-07 15:08:31.606392922 +0000 UTC Remote: 2025-04-07 15:08:25.64335 +0000 UTC m=+309.470778701 (delta=5.963042922s)
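The guest clock is read by running `date +%s.%N` over SSH and compared against the host-recorded time; here the delta is about 6 seconds, which is why `sudo date -s @1744038511` is issued a few lines further down. The sketch below parses that epoch format and computes the skew; the 2-second threshold is an illustrative value, not minikube's.

```go
// clockskew.go - parse the output of `date +%s.%N` and compute guest-vs-host clock skew,
// mirroring the delta computation logged above (sketch only).
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseEpoch converts "1744038511.606392922" into a time.Time.
func parseEpoch(s string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		// pad/truncate the fractional part to exactly 9 digits of nanoseconds
		frac := (parts[1] + "000000000")[:9]
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec).UTC(), nil
}

func main() {
	guest, err := parseEpoch("1744038511.606392922") // value captured in this log
	if err != nil {
		panic(err)
	}
	host := time.Now().UTC()
	delta := guest.Sub(host)
	fmt.Printf("guest=%s host=%s delta=%s\n", guest, host, delta)
	if delta > 2*time.Second || delta < -2*time.Second { // illustrative threshold
		// a real fix would run: sudo date -s @<host-epoch> inside the guest
		fmt.Println("clock skew detected, guest clock should be reset")
	}
}
```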
	I0407 15:08:31.598532    8516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 15:08:34.073597    8516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:08:34.074643    8516 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:34.074643    8516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 15:08:36.993736   13960 start.go:364] duration metric: took 2m35.8161832s to acquireMachinesLock for "docker-flags-422800"
	I0407 15:08:36.994043   13960 start.go:93] Provisioning new machine with config: &{Name:docker-flags-422800 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig
:{KubernetesVersion:v1.32.2 ClusterName:docker-flags-422800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSiz
e:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 15:08:36.994228   13960 start.go:125] createHost starting for "" (driver="hyperv")
	I0407 15:08:36.344524   13864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 15:08:38.591487   13864 main.go:141] libmachine: [stdout =====>] : 172.17.86.101
	
	I0407 15:08:38.591487   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:38.591648   13864 sshutil.go:53] new ssh client: &{IP:172.17.86.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\cert-expiration-287100\id_rsa Username:docker}
	I0407 15:08:38.734976   13864 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0407 15:08:38.915573   13864 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0407 15:08:38.919513   13864 addons.go:514] duration metric: took 11.250457s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0407 15:08:38.919513   13864 start.go:246] waiting for cluster config update ...
	I0407 15:08:38.919513   13864 start.go:255] writing updated cluster config ...
	I0407 15:08:38.930492   13864 ssh_runner.go:195] Run: rm -f paused
	I0407 15:08:39.085799   13864 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0407 15:08:39.089746   13864 out.go:177] * Done! kubectl is now configured to use "cert-expiration-287100" cluster and "default" namespace by default
	I0407 15:08:36.999163   13960 out.go:235] * Creating hyperv VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	I0407 15:08:36.999771   13960 start.go:159] libmachine.API.Create for "docker-flags-422800" (driver="hyperv")
	I0407 15:08:36.999952   13960 client.go:168] LocalClient.Create starting
	I0407 15:08:37.000446   13960 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem
	I0407 15:08:37.000446   13960 main.go:141] libmachine: Decoding PEM data...
	I0407 15:08:37.000992   13960 main.go:141] libmachine: Parsing certificate...
	I0407 15:08:37.001202   13960 main.go:141] libmachine: Reading certificate data from C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem
	I0407 15:08:37.001202   13960 main.go:141] libmachine: Decoding PEM data...
	I0407 15:08:37.001202   13960 main.go:141] libmachine: Parsing certificate...
	I0407 15:08:37.001202   13960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @(Get-Module -ListAvailable hyper-v).Name | Get-Unique
	I0407 15:08:39.131298   13960 main.go:141] libmachine: [stdout =====>] : Hyper-V
	
	I0407 15:08:39.131397   13960 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:39.131496   13960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole(([System.Security.Principal.SecurityIdentifier]::new("S-1-5-32-578")))
	I0407 15:08:36.826291    8516 main.go:141] libmachine: [stdout =====>] : 172.17.91.18
	
	I0407 15:08:36.826291    8516 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:36.833136    8516 main.go:141] libmachine: Using SSH client type: native
	I0407 15:08:36.833970    8516 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.91.18 22 <nil> <nil>}
	I0407 15:08:36.833970    8516 main.go:141] libmachine: About to run SSH command:
	sudo date -s @1744038511
	I0407 15:08:36.993297    8516 main.go:141] libmachine: SSH cmd err, output: <nil>: Mon Apr  7 15:08:31 UTC 2025
	
	I0407 15:08:36.993297    8516 fix.go:236] clock set: Mon Apr  7 15:08:31 UTC 2025
	 (err=<nil>)
	I0407 15:08:36.993375    8516 start.go:83] releasing machines lock for "kubernetes-upgrade-003200", held for 1m6.7421061s
	I0407 15:08:36.993649    8516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 15:08:39.389128    8516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:08:39.389128    8516 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:39.389233    8516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 15:08:40.987552   13960 main.go:141] libmachine: [stdout =====>] : False
	
	I0407 15:08:40.987822   13960 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:40.987822   13960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0407 15:08:42.564135   13960 main.go:141] libmachine: [stdout =====>] : True
	
	I0407 15:08:42.564135   13960 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:42.564628   13960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0407 15:08:42.094623    8516 main.go:141] libmachine: [stdout =====>] : 172.17.91.18
	
	I0407 15:08:42.094623    8516 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:42.098857    8516 ssh_runner.go:195] Run: curl.exe -sS -m 2 https://registry.k8s.io/
	I0407 15:08:42.098955    8516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 15:08:42.109218    8516 ssh_runner.go:195] Run: cat /version.json
	I0407 15:08:42.109218    8516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 15:08:44.474503    8516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:08:44.474614    8516 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:44.474786    8516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 15:08:44.481738    8516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:08:44.481738    8516 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:44.481738    8516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 15:08:46.596881   13960 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0407 15:08:46.597137   13960 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:46.599890   13960 main.go:141] libmachine: Downloading C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\boot2docker.iso from file://C:/Users/jenkins.minikube3/minikube-integration/.minikube/cache/iso/amd64/minikube-v1.35.0-amd64.iso...
	I0407 15:08:47.100862   13960 main.go:141] libmachine: Creating SSH key...
	I0407 15:08:47.342141   13960 main.go:141] libmachine: Creating VM...
	I0407 15:08:47.342242   13960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive [Console]::OutputEncoding = [Text.Encoding]::UTF8; ConvertTo-Json @(Hyper-V\Get-VMSwitch|Select Id, Name, SwitchType|Where-Object {($_.SwitchType -eq 'External') -or ($_.Id -eq 'c08cb7b8-9b3c-408e-8e30-5e16a3aeb444')}|Sort-Object -Property SwitchType)
	I0407 15:08:47.212140    8516 main.go:141] libmachine: [stdout =====>] : 172.17.91.18
	
	I0407 15:08:47.212140    8516 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:47.212502    8516 sshutil.go:53] new ssh client: &{IP:172.17.91.18 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\kubernetes-upgrade-003200\id_rsa Username:docker}
	I0407 15:08:47.243208    8516 main.go:141] libmachine: [stdout =====>] : 172.17.91.18
	
	I0407 15:08:47.243208    8516 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:47.243658    8516 sshutil.go:53] new ssh client: &{IP:172.17.91.18 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\kubernetes-upgrade-003200\id_rsa Username:docker}
	I0407 15:08:47.307315    8516 ssh_runner.go:235] Completed: curl.exe -sS -m 2 https://registry.k8s.io/: (5.2084146s)
	W0407 15:08:47.307443    8516 start.go:867] [curl.exe -sS -m 2 https://registry.k8s.io/] failed: curl.exe -sS -m 2 https://registry.k8s.io/: Process exited with status 127
	stdout:
	
	stderr:
	bash: line 1: curl.exe: command not found
	I0407 15:08:47.341747    8516 ssh_runner.go:235] Completed: cat /version.json: (5.232415s)
	I0407 15:08:47.353427    8516 ssh_runner.go:195] Run: systemctl --version
	I0407 15:08:47.378520    8516 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0407 15:08:47.388441    8516 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0407 15:08:47.399982    8516 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	W0407 15:08:47.424819    8516 out.go:270] ! Failing to connect to https://registry.k8s.io/ from inside the minikube VM
	W0407 15:08:47.425011    8516 out.go:270] * To pull new external images, you may need to configure a proxy: https://minikube.sigs.k8s.io/docs/reference/networking/proxy/
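The reachability check runs `curl.exe -sS -m 2 https://registry.k8s.io/` over SSH inside the Linux guest, where no `curl.exe` binary exists; the shell exits with status 127 ("command not found"), which surfaces as the proxy warning above. For comparison, a host-side probe with the same 2-second budget could look like the sketch below; this is not the harness's own check.

```go
// registry_probe.go - probe https://registry.k8s.io/ with a 2-second budget,
// roughly what `curl -sS -m 2 https://registry.k8s.io/` would do (host-side sketch).
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{Timeout: 2 * time.Second}
	resp, err := client.Get("https://registry.k8s.io/")
	if err != nil {
		fmt.Println("registry.k8s.io unreachable:", err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("registry.k8s.io reachable, status:", resp.Status)
}
```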
	I0407 15:08:47.437135    8516 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%p, " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0407 15:08:47.475145    8516 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0407 15:08:47.475145    8516 start.go:495] detecting cgroup driver to use...
	I0407 15:08:47.475518    8516 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 15:08:47.539905    8516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0407 15:08:47.571801    8516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0407 15:08:47.592368    8516 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0407 15:08:47.601974    8516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0407 15:08:47.636758    8516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 15:08:47.670722    8516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0407 15:08:47.704411    8516 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0407 15:08:47.736231    8516 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0407 15:08:47.771183    8516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0407 15:08:47.814808    8516 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0407 15:08:47.858807    8516 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0407 15:08:47.893732    8516 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0407 15:08:47.935447    8516 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0407 15:08:47.966926    8516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 15:08:48.256940    8516 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0407 15:08:48.300812    8516 start.go:495] detecting cgroup driver to use...
	I0407 15:08:48.312742    8516 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0407 15:08:48.352377    8516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 15:08:48.386694    8516 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0407 15:08:48.444051    8516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0407 15:08:48.484781    8516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0407 15:08:48.508039    8516 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0407 15:08:48.558962    8516 ssh_runner.go:195] Run: which cri-dockerd
	I0407 15:08:48.576996    8516 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0407 15:08:48.595151    8516 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0407 15:08:48.638335    8516 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0407 15:08:48.910314    8516 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0407 15:08:49.185576    8516 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0407 15:08:49.185931    8516 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0407 15:08:49.234974    8516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 15:08:49.515016    8516 ssh_runner.go:195] Run: sudo systemctl restart docker
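The 130-byte /etc/docker/daemon.json written at 15:08:49 selects the cgroupfs cgroup driver but is generated in memory and not shown in this log. The sketch below emits a representative daemon.json with that setting; apart from the `exec-opts` entry, the fields are illustrative and may differ from what minikube actually writes.

```go
// daemon_json.go - emit a representative /etc/docker/daemon.json selecting the
// cgroupfs cgroup driver (illustrative; the exact 130-byte file above is not reproduced).
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

func main() {
	cfg := map[string]any{
		"exec-opts":      []string{"native.cgroupdriver=cgroupfs"},
		"log-driver":     "json-file",
		"log-opts":       map[string]string{"max-size": "100m"},
		"storage-driver": "overlay2",
	}
	out, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(string(out)) // write to /etc/docker/daemon.json, then systemctl restart docker
}
```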
	I0407 15:08:50.557942   13960 main.go:141] libmachine: [stdout =====>] : [
	    {
	        "Id":  "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444",
	        "Name":  "Default Switch",
	        "SwitchType":  1
	    }
	]
	
	I0407 15:08:50.558112   13960 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:50.558218   13960 main.go:141] libmachine: Using switch "Default Switch"
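The ConvertTo-Json output above lists the available Hyper-V switches; the driver prefers an External switch and otherwise falls back to the built-in Default Switch (Id c08cb7b8-9b3c-408e-8e30-5e16a3aeb444). The sketch below decodes that JSON and applies the same preference; it illustrates the logged decision rather than reproducing minikube's code.

```go
// vmswitch_select.go - decode the Get-VMSwitch JSON shown above and pick a switch,
// preferring an External switch and falling back to the Default Switch (sketch only).
package main

import (
	"encoding/json"
	"fmt"
	"log"
)

type vmSwitch struct {
	Id         string
	Name       string
	SwitchType int // Hyper-V enumeration: 0 = Private, 1 = Internal, 2 = External
}

const defaultSwitchID = "c08cb7b8-9b3c-408e-8e30-5e16a3aeb444"

func main() {
	raw := `[{"Id":"c08cb7b8-9b3c-408e-8e30-5e16a3aeb444","Name":"Default Switch","SwitchType":1}]`
	var switches []vmSwitch
	if err := json.Unmarshal([]byte(raw), &switches); err != nil {
		log.Fatal(err)
	}
	for _, s := range switches {
		if s.SwitchType == 2 { // an External switch wins if one exists
			fmt.Println("using external switch:", s.Name)
			return
		}
	}
	for _, s := range switches {
		if s.Id == defaultSwitchID {
			fmt.Println("using switch:", s.Name) // "Default Switch", as in the log
			return
		}
	}
	log.Fatal("no usable Hyper-V switch found")
}
```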
	I0407 15:08:50.558380   13960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive @([Security.Principal.WindowsPrincipal][Security.Principal.WindowsIdentity]::GetCurrent()).IsInRole([Security.Principal.WindowsBuiltInRole] "Administrator")
	I0407 15:08:52.362503   13960 main.go:141] libmachine: [stdout =====>] : True
	
	I0407 15:08:52.362704   13960 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:52.362704   13960 main.go:141] libmachine: Creating VHD
	I0407 15:08:52.362704   13960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\docker-flags-422800\fixed.vhd' -SizeBytes 10MB -Fixed
	I0407 15:08:56.113270   13960 main.go:141] libmachine: [stdout =====>] : 
	
	ComputerName            : minikube3
	Path                    : C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\docker-flags-422800\fixed.
	                          vhd
	VhdFormat               : VHD
	VhdType                 : Fixed
	FileSize                : 10486272
	Size                    : 10485760
	MinimumSize             : 
	LogicalSectorSize       : 512
	PhysicalSectorSize      : 512
	BlockSize               : 0
	ParentPath              : 
	DiskIdentifier          : 7667E722-3C70-490A-8222-2C2B4F31624A
	FragmentationPercentage : 0
	Alignment               : 1
	Attached                : False
	DiskNumber              : 
	IsPMEMCompatible        : False
	AddressAbstractionType  : None
	Number                  : 
	
	
	
	
	I0407 15:08:56.114084   13960 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:56.114084   13960 main.go:141] libmachine: Writing magic tar header
	I0407 15:08:56.114084   13960 main.go:141] libmachine: Writing SSH key tar header
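"Writing magic tar header" and "Writing SSH key tar header" refer to packing the machine's freshly created SSH key into a tar stream placed on the fixed VHD so the guest can import it on first boot. The archive/tar sketch below packs an id_rsa file in that spirit; the output path and on-disk layout are assumptions, not the exact format the driver uses.

```go
// sshkey_tar.go - pack an SSH private key into a tar stream, illustrating the
// "Writing SSH key tar header" step above (the exact VHD layout is not reproduced).
package main

import (
	"archive/tar"
	"log"
	"os"
)

func main() {
	// hypothetical relative path standing in for the machine's id_rsa from this log
	key, err := os.ReadFile(".minikube/machines/docker-flags-422800/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	out, err := os.Create("sshkey.tar")
	if err != nil {
		log.Fatal(err)
	}
	defer out.Close()

	tw := tar.NewWriter(out)
	hdr := &tar.Header{Name: ".ssh/id_rsa", Mode: 0600, Size: int64(len(key))}
	if err := tw.WriteHeader(hdr); err != nil {
		log.Fatal(err)
	}
	if _, err := tw.Write(key); err != nil {
		log.Fatal(err)
	}
	if err := tw.Close(); err != nil {
		log.Fatal(err)
	}
}
```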
	I0407 15:08:56.126651   13960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Convert-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\docker-flags-422800\fixed.vhd' -DestinationPath 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\docker-flags-422800\disk.vhd' -VHDType Dynamic -DeleteSource
	I0407 15:08:59.304831   13960 main.go:141] libmachine: [stdout =====>] : 
	I0407 15:08:59.304831   13960 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:08:59.305040   13960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Resize-VHD -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\docker-flags-422800\disk.vhd' -SizeBytes 20000MB
	I0407 15:09:02.779397    8516 ssh_runner.go:235] Completed: sudo systemctl restart docker: (13.2641945s)
	I0407 15:09:02.790989    8516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0407 15:09:02.833663    8516 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0407 15:09:02.883515    8516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 15:09:02.927753    8516 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0407 15:09:03.171372    8516 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0407 15:09:03.383072    8516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 15:09:03.618258    8516 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0407 15:09:03.662186    8516 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0407 15:09:03.698647    8516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 15:09:03.922995    8516 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0407 15:09:04.062272    8516 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0407 15:09:04.076551    8516 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0407 15:09:04.087797    8516 start.go:563] Will wait 60s for crictl version
	I0407 15:09:04.100096    8516 ssh_runner.go:195] Run: which crictl
	I0407 15:09:04.118998    8516 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0407 15:09:04.171527    8516 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.4.0
	RuntimeApiVersion:  v1
	I0407 15:09:04.181218    8516 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 15:09:04.227486    8516 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0407 15:09:01.864575   13960 main.go:141] libmachine: [stdout =====>] : 
	I0407 15:09:01.865520   13960 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:09:01.865520   13960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\New-VM docker-flags-422800 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\docker-flags-422800' -SwitchName 'Default Switch' -MemoryStartupBytes 2048MB
	I0407 15:09:04.266828    8516 out.go:235] * Preparing Kubernetes v1.32.2 on Docker 27.4.0 ...
	I0407 15:09:04.266931    8516 ip.go:176] getIPForInterface: searching for "vEthernet (Default Switch)"
	I0407 15:09:04.271638    8516 ip.go:190] "Ethernet 2" does not match prefix "vEthernet (Default Switch)"
	I0407 15:09:04.271638    8516 ip.go:190] "Loopback Pseudo-Interface 1" does not match prefix "vEthernet (Default Switch)"
	I0407 15:09:04.271638    8516 ip.go:185] found prefix matching interface for "vEthernet (Default Switch)": "vEthernet (Default Switch)"
	I0407 15:09:04.271638    8516 ip.go:211] Found interface: {Index:9 MTU:1500 Name:vEthernet (Default Switch) HardwareAddr:00:15:5d:71:e6:e3 Flags:up|broadcast|multicast|running}
	I0407 15:09:04.274486    8516 ip.go:214] interface addr: fe80::8b22:fcbb:f73:c9e5/64
	I0407 15:09:04.274486    8516 ip.go:214] interface addr: 172.17.80.1/20
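The ip.go lines above resolve the host address for host.minikube.internal by scanning the host's network interfaces for one whose name starts with "vEthernet (Default Switch)" and taking its IPv4 address (172.17.80.1/20 in this run). A standard-library sketch of the same lookup:

```go
// hostip.go - find the IPv4 address of the "vEthernet (Default Switch)" interface,
// as the ip.go lines above do when resolving host.minikube.internal (sketch only).
package main

import (
	"fmt"
	"log"
	"net"
	"strings"
)

func main() {
	ifaces, err := net.Interfaces()
	if err != nil {
		log.Fatal(err)
	}
	for _, iface := range ifaces {
		if !strings.HasPrefix(iface.Name, "vEthernet (Default Switch)") {
			continue
		}
		addrs, err := iface.Addrs()
		if err != nil {
			log.Fatal(err)
		}
		for _, addr := range addrs {
			if ipNet, ok := addr.(*net.IPNet); ok && ipNet.IP.To4() != nil {
				fmt.Println("host IP:", ipNet.IP) // 172.17.80.1 in this run
				return
			}
		}
	}
	log.Fatal(`no interface matching "vEthernet (Default Switch)" with an IPv4 address`)
}
```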
	I0407 15:09:04.284480    8516 ssh_runner.go:195] Run: grep 172.17.80.1	host.minikube.internal$ /etc/hosts
	I0407 15:09:04.291286    8516 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-003200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:ku
bernetes-upgrade-003200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.91.18 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:
docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0407 15:09:04.291420    8516 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 15:09:04.301058    8516 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0407 15:09:04.333627    8516 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0407 15:09:04.333686    8516 docker.go:619] Images already preloaded, skipping extraction
	I0407 15:09:04.343543    8516 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0407 15:09:04.376366    8516 docker.go:689] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.32.2
	registry.k8s.io/kube-controller-manager:v1.32.2
	registry.k8s.io/kube-scheduler:v1.32.2
	registry.k8s.io/kube-proxy:v1.32.2
	registry.k8s.io/etcd:3.5.16-0
	registry.k8s.io/coredns/coredns:v1.11.3
	registry.k8s.io/pause:3.10
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0407 15:09:04.376366    8516 cache_images.go:84] Images are preloaded, skipping loading
	I0407 15:09:04.376366    8516 kubeadm.go:934] updating node { 172.17.91.18 8443 v1.32.2 docker true true} ...
	I0407 15:09:04.376366    8516 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.32.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-003200 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=172.17.91.18
	
	[Install]
	 config:
	{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-003200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0407 15:09:04.388558    8516 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0407 15:09:04.458840    8516 cni.go:84] Creating CNI manager for ""
	I0407 15:09:04.458840    8516 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0407 15:09:04.458840    8516 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0407 15:09:04.458840    8516 kubeadm.go:189] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:172.17.91.18 APIServerPort:8443 KubernetesVersion:v1.32.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-003200 NodeName:kubernetes-upgrade-003200 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "172.17.91.18"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:172.17.91.18 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt
StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0407 15:09:04.458840    8516 kubeadm.go:195] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 172.17.91.18
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-003200"
	  kubeletExtraArgs:
	    - name: "node-ip"
	      value: "172.17.91.18"
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta4
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "172.17.91.18"]
	  extraArgs:
	    - name: "enable-admission-plugins"
	      value: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    - name: "allocate-node-cidrs"
	      value: "true"
	    - name: "leader-elect"
	      value: "false"
	scheduler:
	  extraArgs:
	    - name: "leader-elect"
	      value: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      - name: "proxy-refresh-interval"
	        value: "70000"
	kubernetesVersion: v1.32.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
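
Note: the block above is the kubeadm configuration that minikube renders before copying it to /var/tmp/minikube/kubeadm.yaml.new (see the scp step a few lines below); it bundles an InitConfiguration, a ClusterConfiguration, a KubeletConfiguration and a KubeProxyConfiguration into one multi-document YAML file. Purely as an illustration of that rendering step, here is a minimal Go sketch using text/template; it is not minikube's actual bootstrapper template, and the values are simply the ones visible in this log.

// kubeadm_config_sketch.go - illustrative only; not minikube's real template code.
package main

import (
	"os"
	"text/template"
)

// params is a hypothetical subset of the values seen in the log above.
type params struct {
	AdvertiseAddress string
	BindPort         int
	NodeName         string
	PodSubnet        string
	ServiceSubnet    string
	K8sVersion       string
}

const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta4
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.BindPort}}
nodeRegistration:
  criSocket: unix:///var/run/cri-dockerd.sock
  name: "{{.NodeName}}"
---
apiVersion: kubeadm.k8s.io/v1beta4
kind: ClusterConfiguration
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	// Render to stdout; a bootstrapper would instead write the result to a file
	// such as /var/tmp/minikube/kubeadm.yaml.new and copy it into the VM.
	_ = t.Execute(os.Stdout, params{
		AdvertiseAddress: "172.17.91.18",
		BindPort:         8443,
		NodeName:         "kubernetes-upgrade-003200",
		PodSubnet:        "10.244.0.0/16",
		ServiceSubnet:    "10.96.0.0/12",
		K8sVersion:       "v1.32.2",
	})
}
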
	
	I0407 15:09:04.470996    8516 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.32.2
	I0407 15:09:04.490747    8516 binaries.go:44] Found k8s binaries, skipping transfer
	I0407 15:09:04.501966    8516 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0407 15:09:04.520020    8516 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (324 bytes)
	I0407 15:09:04.550884    8516 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0407 15:09:04.583143    8516 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2303 bytes)
	I0407 15:09:04.624915    8516 ssh_runner.go:195] Run: grep 172.17.91.18	control-plane.minikube.internal$ /etc/hosts
	I0407 15:09:04.640904    8516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 15:09:04.858612    8516 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 15:09:04.890047    8516 certs.go:68] Setting up C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubernetes-upgrade-003200 for IP: 172.17.91.18
	I0407 15:09:04.890047    8516 certs.go:194] generating shared ca certs ...
	I0407 15:09:04.890266    8516 certs.go:226] acquiring lock for ca certs: {Name:mk09ff4ada22228900e1815c250154c7d8d76854 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 15:09:04.891073    8516 certs.go:235] skipping valid "minikubeCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key
	I0407 15:09:04.891495    8516 certs.go:235] skipping valid "proxyClientCA" ca cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key
	I0407 15:09:04.891713    8516 certs.go:256] generating profile certs ...
	I0407 15:09:04.892396    8516 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubernetes-upgrade-003200\client.key
	I0407 15:09:04.892396    8516 certs.go:359] skipping valid signed profile cert regeneration for "minikube": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubernetes-upgrade-003200\apiserver.key.822298e7
	I0407 15:09:04.893233    8516 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubernetes-upgrade-003200\proxy-client.key
	I0407 15:09:04.894460    8516 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem (1338 bytes)
	W0407 15:09:04.894460    8516 certs.go:480] ignoring C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728_empty.pem, impossibly tiny 0 bytes
	I0407 15:09:04.894460    8516 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem (1679 bytes)
	I0407 15:09:04.895124    8516 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem (1078 bytes)
	I0407 15:09:04.895457    8516 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem (1123 bytes)
	I0407 15:09:04.895810    8516 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem (1679 bytes)
	I0407 15:09:04.895934    8516 certs.go:484] found cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem (1708 bytes)
	I0407 15:09:04.897850    8516 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0407 15:09:04.940380    8516 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0407 15:09:05.045936    8516 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0407 15:09:05.145958    8516 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0407 15:09:05.225388    8516 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubernetes-upgrade-003200\apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0407 15:09:05.296706    8516 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubernetes-upgrade-003200\apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0407 15:09:05.360661    8516 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubernetes-upgrade-003200\proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0407 15:09:05.418961    8516 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubernetes-upgrade-003200\proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0407 15:09:05.469977    8516 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0407 15:09:05.537112    8516 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\7728.pem --> /usr/share/ca-certificates/7728.pem (1338 bytes)
	I0407 15:09:05.592793    8516 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\ssl\certs\77282.pem --> /usr/share/ca-certificates/77282.pem (1708 bytes)
	I0407 15:09:05.654384    8516 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0407 15:09:05.705825    8516 ssh_runner.go:195] Run: openssl version
	I0407 15:09:05.727256    8516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0407 15:09:05.759806    8516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0407 15:09:05.767321    8516 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr  7 12:22 /usr/share/ca-certificates/minikubeCA.pem
	I0407 15:09:05.780496    8516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0407 15:09:05.801086    8516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0407 15:09:05.845451    8516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/7728.pem && ln -fs /usr/share/ca-certificates/7728.pem /etc/ssl/certs/7728.pem"
	I0407 15:09:05.879433    8516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/7728.pem
	I0407 15:09:05.886626    8516 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr  7 12:38 /usr/share/ca-certificates/7728.pem
	I0407 15:09:05.898508    8516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/7728.pem
	I0407 15:09:05.918920    8516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/7728.pem /etc/ssl/certs/51391683.0"
	I0407 15:09:05.945329    8516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/77282.pem && ln -fs /usr/share/ca-certificates/77282.pem /etc/ssl/certs/77282.pem"
	I0407 15:09:05.976804    8516 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/77282.pem
	I0407 15:09:05.992395    8516 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr  7 12:38 /usr/share/ca-certificates/77282.pem
	I0407 15:09:06.004399    8516 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/77282.pem
	I0407 15:09:06.036397    8516 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/77282.pem /etc/ssl/certs/3ec20f2e.0"
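
Note: the sequence above installs each extra CA certificate under /usr/share/ca-certificates and then links it into /etc/ssl/certs under its OpenSSL subject hash (e.g. b5213941.0), which is how OpenSSL's default verify path locates trusted CAs. Below is a minimal Go sketch of those two steps, assuming root privileges and an openssl binary on PATH; the certificate path is taken from the log, and the program is an illustration, not minikube's implementation.

// cahash_sketch.go - link a CA into /etc/ssl/certs by its OpenSSL subject hash,
// mirroring the "openssl x509 -hash -noout" + "ln -fs" sequence in the log above.
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

func main() {
	const pemPath = "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log
	// Compute the subject hash exactly as the log does.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	// Equivalent of: ln -fs <pem> /etc/ssl/certs/<hash>.0 (requires root).
	_ = os.Remove(link)
	if err := os.Symlink(pemPath, link); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("linked", link, "->", pemPath)
}
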
	I0407 15:09:06.089186    8516 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0407 15:09:06.110461    8516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0407 15:09:06.136472    8516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0407 15:09:06.166047    8516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0407 15:09:06.193123    8516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0407 15:09:06.229808    8516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0407 15:09:06.250092    8516 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
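
Note: each `openssl x509 -noout -in <cert> -checkend 86400` run above simply verifies that the certificate stays valid for at least another 24 hours. Below is a minimal Go equivalent using crypto/x509; the file path is one of those checked in the log, and the program is illustrative, not minikube's code.

// certcheck_sketch.go - Go equivalent of `openssl x509 -noout -checkend 86400`.
package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

func main() {
	// Hypothetical choice of one of the certificates checked in the log.
	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	block, _ := pem.Decode(data)
	if block == nil {
		fmt.Fprintln(os.Stderr, "no PEM block found")
		os.Exit(1)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	// -checkend 86400 fails if the certificate expires within the next 24 hours.
	if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
		fmt.Println("certificate will expire within 24h")
		os.Exit(1)
	}
	fmt.Println("certificate is valid for at least another 24h")
}
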
	I0407 15:09:06.263980    8516 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-003200 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:kubernetes-upgrade-003200 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:172.17.91.18 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 15:09:06.273973    8516 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0407 15:09:05.647655   13960 main.go:141] libmachine: [stdout =====>] : 
	Name                State CPUUsage(%) MemoryAssigned(M) Uptime   Status             Version
	----                ----- ----------- ----------------- ------   ------             -------
	docker-flags-422800 Off   0           0                 00:00:00 Operating normally 9.0    
	
	
	
	I0407 15:09:05.647655   13960 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:09:05.648667   13960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMMemory -VMName docker-flags-422800 -DynamicMemoryEnabled $false
	I0407 15:09:08.104588   13960 main.go:141] libmachine: [stdout =====>] : 
	I0407 15:09:08.104588   13960 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:09:08.104864   13960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMProcessor docker-flags-422800 -Count 2
	I0407 15:09:06.383624    8516 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0407 15:09:06.411364    8516 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0407 15:09:06.411364    8516 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0407 15:09:06.423679    8516 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0407 15:09:06.450705    8516 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0407 15:09:06.452526    8516 kubeconfig.go:125] found "kubernetes-upgrade-003200" server: "https://172.17.91.18:8443"
	I0407 15:09:06.457558    8516 kapi.go:59] client config for kubernetes-upgrade-003200: &rest.Config{Host:"https://172.17.91.18:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-003200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-003200\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil),
KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2be92e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0407 15:09:06.459330    8516 envvar.go:172] "Feature gate default state" feature="WatchListClient" enabled=false
	I0407 15:09:06.459330    8516 envvar.go:172] "Feature gate default state" feature="ClientsAllowCBOR" enabled=false
	I0407 15:09:06.459330    8516 envvar.go:172] "Feature gate default state" feature="ClientsPreferCBOR" enabled=false
	I0407 15:09:06.459330    8516 envvar.go:172] "Feature gate default state" feature="InformerResourceVersion" enabled=false
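
Note: the kapi.go line above shows the client-go rest.Config minikube builds for this cluster: the profile's client certificate and key plus the shared minikube CA. Below is a minimal sketch of constructing an equivalent config and listing kube-system pods with client-go; the host and certificate paths are the ones dumped in the log, and the program is illustrative rather than minikube's kapi helper.

// clientconfig_sketch.go - building a client-go config like the one dumped above
// and listing kube-system pods, as the system_pods wait later in the log does.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// Host and certificate paths copied from the rest.Config dump in the log.
	cfg := &rest.Config{
		Host: "https://172.17.91.18:8443",
		TLSClientConfig: rest.TLSClientConfig{
			CertFile: `C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubernetes-upgrade-003200\client.crt`,
			KeyFile:  `C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\kubernetes-upgrade-003200\client.key`,
			CAFile:   `C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.crt`,
		},
	}
	clientset, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	pods, err := clientset.CoreV1().Pods("kube-system").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("kube-system pods:", len(pods.Items))
}
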
	I0407 15:09:06.472528    8516 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0407 15:09:06.512186    8516 kubeadm.go:630] The running cluster does not require reconfiguration: 172.17.91.18
	I0407 15:09:06.513193    8516 kubeadm.go:1160] stopping kube-system containers ...
	I0407 15:09:06.523323    8516 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0407 15:09:06.611294    8516 docker.go:483] Stopping containers: [2038a60dbd13 cacec1a0f0ae 2b2c4e6e34c2 320e5b85935b 4a02b90ca954 af413444c564 fb1e0f38b8f9 9f0109a829e3 b3f2ff89db33 09175f5ea78b 22c2aaa16185 bfb8e5284937 385f8adc95f9 dd031145c7eb 4b5ea2505466 4d73f97632ca 53526e192c6c a5eedd36749e 815803fd99f6 b43804c1f5cc 7b052fac72f9 a4154d11f8f5 6f94b43fe3f4 78e5f4e844ca 76aeab9f5e06 a510a79a3be7 aa4714629caf]
	I0407 15:09:06.620416    8516 ssh_runner.go:195] Run: docker stop 2038a60dbd13 cacec1a0f0ae 2b2c4e6e34c2 320e5b85935b 4a02b90ca954 af413444c564 fb1e0f38b8f9 9f0109a829e3 b3f2ff89db33 09175f5ea78b 22c2aaa16185 bfb8e5284937 385f8adc95f9 dd031145c7eb 4b5ea2505466 4d73f97632ca 53526e192c6c a5eedd36749e 815803fd99f6 b43804c1f5cc 7b052fac72f9 a4154d11f8f5 6f94b43fe3f4 78e5f4e844ca 76aeab9f5e06 a510a79a3be7 aa4714629caf
	I0407 15:09:07.648775    8516 ssh_runner.go:235] Completed: docker stop 2038a60dbd13 cacec1a0f0ae 2b2c4e6e34c2 320e5b85935b 4a02b90ca954 af413444c564 fb1e0f38b8f9 9f0109a829e3 b3f2ff89db33 09175f5ea78b 22c2aaa16185 bfb8e5284937 385f8adc95f9 dd031145c7eb 4b5ea2505466 4d73f97632ca 53526e192c6c a5eedd36749e 815803fd99f6 b43804c1f5cc 7b052fac72f9 a4154d11f8f5 6f94b43fe3f4 78e5f4e844ca 76aeab9f5e06 a510a79a3be7 aa4714629caf: (1.0282472s)
	I0407 15:09:07.661937    8516 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0407 15:09:07.751666    8516 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0407 15:09:07.771676    8516 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5651 Apr  7 15:02 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Apr  7 15:02 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 2039 Apr  7 15:03 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5604 Apr  7 15:02 /etc/kubernetes/scheduler.conf
	
	I0407 15:09:07.782532    8516 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0407 15:09:07.814290    8516 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0407 15:09:07.843938    8516 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0407 15:09:07.863727    8516 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0407 15:09:07.875241    8516 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0407 15:09:07.902669    8516 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0407 15:09:07.921522    8516 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0407 15:09:07.935353    8516 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0407 15:09:07.965845    8516 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0407 15:09:08.010943    8516 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 15:09:08.090593    8516 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 15:09:09.528789    8516 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.4381845s)
	I0407 15:09:09.528789    8516 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0407 15:09:09.862134    8516 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 15:09:09.968970    8516 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0407 15:09:10.082457    8516 api_server.go:52] waiting for apiserver process to appear ...
	I0407 15:09:10.096109    8516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 15:09:10.594294    8516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 15:09:11.094826    8516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 15:09:10.379631   13960 main.go:141] libmachine: [stdout =====>] : 
	I0407 15:09:10.379631   13960 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:09:10.380678   13960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Set-VMDvdDrive -VMName docker-flags-422800 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\docker-flags-422800\boot2docker.iso'
	I0407 15:09:13.103606   13960 main.go:141] libmachine: [stdout =====>] : 
	I0407 15:09:13.104607   13960 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:09:13.104692   13960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Add-VMHardDiskDrive -VMName docker-flags-422800 -Path 'C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\docker-flags-422800\disk.vhd'
	I0407 15:09:11.596327    8516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 15:09:12.095966    8516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 15:09:12.120202    8516 api_server.go:72] duration metric: took 2.0377282s to wait for apiserver process to appear ...
	I0407 15:09:12.120202    8516 api_server.go:88] waiting for apiserver healthz status ...
	I0407 15:09:12.120202    8516 api_server.go:253] Checking apiserver healthz at https://172.17.91.18:8443/healthz ...
	I0407 15:09:14.674196    8516 api_server.go:279] https://172.17.91.18:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0407 15:09:14.674196    8516 api_server.go:103] status: https://172.17.91.18:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0407 15:09:14.674196    8516 api_server.go:253] Checking apiserver healthz at https://172.17.91.18:8443/healthz ...
	I0407 15:09:14.750720    8516 api_server.go:279] https://172.17.91.18:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0407 15:09:14.750775    8516 api_server.go:103] status: https://172.17.91.18:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0407 15:09:15.120421    8516 api_server.go:253] Checking apiserver healthz at https://172.17.91.18:8443/healthz ...
	I0407 15:09:15.131284    8516 api_server.go:279] https://172.17.91.18:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0407 15:09:15.131284    8516 api_server.go:103] status: https://172.17.91.18:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0407 15:09:15.620804    8516 api_server.go:253] Checking apiserver healthz at https://172.17.91.18:8443/healthz ...
	I0407 15:09:15.630813    8516 api_server.go:279] https://172.17.91.18:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0407 15:09:15.630813    8516 api_server.go:103] status: https://172.17.91.18:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0407 15:09:16.120602    8516 api_server.go:253] Checking apiserver healthz at https://172.17.91.18:8443/healthz ...
	I0407 15:09:16.135578    8516 api_server.go:279] https://172.17.91.18:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0407 15:09:16.135690    8516 api_server.go:103] status: https://172.17.91.18:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-local-available-controller ok
	[+]poststarthook/apiservice-status-remote-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0407 15:09:16.621171    8516 api_server.go:253] Checking apiserver healthz at https://172.17.91.18:8443/healthz ...
	I0407 15:09:16.638553    8516 api_server.go:279] https://172.17.91.18:8443/healthz returned 200:
	ok
	I0407 15:09:16.694312    8516 api_server.go:141] control plane version: v1.32.2
	I0407 15:09:16.694397    8516 api_server.go:131] duration metric: took 4.574157s to wait for apiserver health ...
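
Note: the api_server.go lines above poll https://172.17.91.18:8443/healthz roughly every 500ms until it returns 200 "ok"; a 403 is expected while anonymous access to /healthz is still forbidden, and a 500 while post-start hooks such as rbac/bootstrap-roles have not finished. Below is a minimal polling sketch; it skips TLS verification for brevity (a real check would trust the cluster CA) and is not minikube's implementation.

// healthz_poll_sketch.go - poll an apiserver /healthz endpoint until it reports ok.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// For brevity this sketch skips TLS verification; a real check would load the
	// cluster CA (ca.crt) into a tls.Config RootCAs pool instead.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(2 * time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://172.17.91.18:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Println("healthz:", string(body)) // prints "ok"
				return
			}
			// 403 (anonymous forbidden) and 500 (post-start hooks pending) are retried.
			fmt.Printf("healthz returned %d, retrying\n", resp.StatusCode)
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for apiserver health")
}
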
	I0407 15:09:16.694397    8516 cni.go:84] Creating CNI manager for ""
	I0407 15:09:16.694397    8516 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0407 15:09:16.698092    8516 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0407 15:09:16.712323    8516 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0407 15:09:16.765086    8516 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0407 15:09:16.868482    8516 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 15:09:16.880409    8516 system_pods.go:59] 8 kube-system pods found
	I0407 15:09:16.880409    8516 system_pods.go:61] "coredns-668d6bf9bc-d2kfp" [20f42afb-ea85-478f-89fe-755406b53f56] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0407 15:09:16.880409    8516 system_pods.go:61] "coredns-668d6bf9bc-xcbrp" [e451f61e-f2dc-4468-9023-2682a6686221] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0407 15:09:16.880409    8516 system_pods.go:61] "etcd-kubernetes-upgrade-003200" [4468a83b-53ec-4aad-b8f0-3202dd9571b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0407 15:09:16.880409    8516 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-003200" [970ac155-25d5-4377-aa36-7bc386d888b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0407 15:09:16.880409    8516 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-003200" [daf34d4a-d30d-42f9-b8a3-f9dfd3b1dce0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0407 15:09:16.880409    8516 system_pods.go:61] "kube-proxy-7rcqc" [8eff4c65-a885-401c-9503-e693fde72999] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0407 15:09:16.880409    8516 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-003200" [7d0eebbe-a168-49a4-a3e3-220d86e4b6a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0407 15:09:16.880409    8516 system_pods.go:61] "storage-provisioner" [ade833e1-ab68-472c-88b1-8258864314c0] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0407 15:09:16.880409    8516 system_pods.go:74] duration metric: took 11.9272ms to wait for pod list to return data ...
	I0407 15:09:16.880409    8516 node_conditions.go:102] verifying NodePressure condition ...
	I0407 15:09:16.897771    8516 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 15:09:16.897848    8516 node_conditions.go:123] node cpu capacity is 2
	I0407 15:09:16.897848    8516 node_conditions.go:105] duration metric: took 17.4387ms to run NodePressure ...
	I0407 15:09:16.897848    8516 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.32.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0407 15:09:17.738807    8516 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0407 15:09:17.761207    8516 ops.go:34] apiserver oom_adj: -16
	I0407 15:09:17.761243    8516 kubeadm.go:597] duration metric: took 11.3497844s to restartPrimaryControlPlane
	I0407 15:09:17.761243    8516 kubeadm.go:394] duration metric: took 11.497168s to StartCluster
	I0407 15:09:17.761243    8516 settings.go:142] acquiring lock: {Name:mke99fb8c09012609ce6804e7dfd4d68f5541df7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 15:09:17.761243    8516 settings.go:150] Updating kubeconfig:  C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 15:09:17.763754    8516 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\kubeconfig: {Name:mk966a7640504e03827322930a51a762b5508893 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 15:09:17.764406    8516 start.go:235] Will wait 6m0s for node &{Name: IP:172.17.91.18 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 15:09:17.764406    8516 addons.go:511] enable addons start: toEnable=map[ambassador:false amd-gpu-device-plugin:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0407 15:09:17.764406    8516 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-003200"
	I0407 15:09:17.764406    8516 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-003200"
	I0407 15:09:17.765718    8516 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-003200"
	I0407 15:09:17.765806    8516 config.go:182] Loaded profile config "kubernetes-upgrade-003200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 15:09:17.765887    8516 addons.go:238] Setting addon storage-provisioner=true in "kubernetes-upgrade-003200"
	W0407 15:09:17.765930    8516 addons.go:247] addon storage-provisioner should already be in state true
	I0407 15:09:17.766108    8516 host.go:66] Checking if "kubernetes-upgrade-003200" exists ...
	I0407 15:09:17.766947    8516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 15:09:17.768027    8516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 15:09:17.768027    8516 out.go:177] * Verifying Kubernetes components...
	I0407 15:09:15.872578   13960 main.go:141] libmachine: [stdout =====>] : 
	I0407 15:09:15.872578   13960 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:09:15.872578   13960 main.go:141] libmachine: Starting VM...
	I0407 15:09:15.872578   13960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive Hyper-V\Start-VM docker-flags-422800
	I0407 15:09:19.146220   13960 main.go:141] libmachine: [stdout =====>] : 
	I0407 15:09:19.146960   13960 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:09:19.147041   13960 main.go:141] libmachine: Waiting for host to start...
	I0407 15:09:19.147137   13960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-422800 ).state
	I0407 15:09:17.785571    8516 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 15:09:18.156902    8516 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 15:09:18.202901    8516 api_server.go:52] waiting for apiserver process to appear ...
	I0407 15:09:18.220385    8516 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 15:09:18.260777    8516 api_server.go:72] duration metric: took 496.3668ms to wait for apiserver process to appear ...
	I0407 15:09:18.260934    8516 api_server.go:88] waiting for apiserver healthz status ...
	I0407 15:09:18.260934    8516 api_server.go:253] Checking apiserver healthz at https://172.17.91.18:8443/healthz ...
	I0407 15:09:18.281312    8516 api_server.go:279] https://172.17.91.18:8443/healthz returned 200:
	ok
	I0407 15:09:18.286613    8516 api_server.go:141] control plane version: v1.32.2
	I0407 15:09:18.286613    8516 api_server.go:131] duration metric: took 25.6791ms to wait for apiserver health ...
	I0407 15:09:18.286613    8516 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 15:09:18.301540    8516 system_pods.go:59] 8 kube-system pods found
	I0407 15:09:18.301540    8516 system_pods.go:61] "coredns-668d6bf9bc-d2kfp" [20f42afb-ea85-478f-89fe-755406b53f56] Running
	I0407 15:09:18.301540    8516 system_pods.go:61] "coredns-668d6bf9bc-xcbrp" [e451f61e-f2dc-4468-9023-2682a6686221] Running
	I0407 15:09:18.301633    8516 system_pods.go:61] "etcd-kubernetes-upgrade-003200" [4468a83b-53ec-4aad-b8f0-3202dd9571b1] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0407 15:09:18.301633    8516 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-003200" [970ac155-25d5-4377-aa36-7bc386d888b9] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0407 15:09:18.301633    8516 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-003200" [daf34d4a-d30d-42f9-b8a3-f9dfd3b1dce0] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0407 15:09:18.301633    8516 system_pods.go:61] "kube-proxy-7rcqc" [8eff4c65-a885-401c-9503-e693fde72999] Running
	I0407 15:09:18.301633    8516 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-003200" [7d0eebbe-a168-49a4-a3e3-220d86e4b6a2] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0407 15:09:18.301633    8516 system_pods.go:61] "storage-provisioner" [ade833e1-ab68-472c-88b1-8258864314c0] Running
	I0407 15:09:18.301633    8516 system_pods.go:74] duration metric: took 15.0196ms to wait for pod list to return data ...
	I0407 15:09:18.301633    8516 kubeadm.go:582] duration metric: took 537.2228ms to wait for: map[apiserver:true system_pods:true]
	I0407 15:09:18.301788    8516 node_conditions.go:102] verifying NodePressure condition ...
	I0407 15:09:18.314431    8516 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 15:09:18.314431    8516 node_conditions.go:123] node cpu capacity is 2
	I0407 15:09:18.314431    8516 node_conditions.go:105] duration metric: took 12.643ms to run NodePressure ...
	I0407 15:09:18.314431    8516 start.go:241] waiting for startup goroutines ...
	I0407 15:09:20.259882    8516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:09:20.259987    8516 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:09:20.261153    8516 kapi.go:59] client config for kubernetes-upgrade-003200: &rest.Config{Host:"https://172.17.91.18:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-003200\\client.crt", KeyFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\kubernetes-upgrade-003200\\client.key", CAFile:"C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\ca.crt", CertData:[]uint8(nil),
KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x2be92e0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0407 15:09:20.262153    8516 addons.go:238] Setting addon default-storageclass=true in "kubernetes-upgrade-003200"
	W0407 15:09:20.262153    8516 addons.go:247] addon default-storageclass should already be in state true
	I0407 15:09:20.262153    8516 host.go:66] Checking if "kubernetes-upgrade-003200" exists ...
	I0407 15:09:20.263156    8516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 15:09:20.285585    8516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:09:20.285670    8516 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:09:20.296284    8516 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0407 15:09:20.299252    8516 addons.go:435] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 15:09:20.299252    8516 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0407 15:09:20.299252    8516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 15:09:21.795779   13960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:09:21.795779   13960 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:09:21.795779   13960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-422800 ).networkadapters[0]).ipaddresses[0]
	I0407 15:09:24.595405   13960 main.go:141] libmachine: [stdout =====>] : 
	I0407 15:09:24.595405   13960 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:09:22.803090    8516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:09:22.803236    8516 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:09:22.803349    8516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 15:09:22.837296    8516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:09:22.837296    8516 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:09:22.837296    8516 addons.go:435] installing /etc/kubernetes/addons/storageclass.yaml
	I0407 15:09:22.837296    8516 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0407 15:09:22.837296    8516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM kubernetes-upgrade-003200 ).state
	I0407 15:09:25.178972    8516 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:09:25.179348    8516 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:09:25.179428    8516 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM kubernetes-upgrade-003200 ).networkadapters[0]).ipaddresses[0]
	I0407 15:09:25.667467    8516 main.go:141] libmachine: [stdout =====>] : 172.17.91.18
	
	I0407 15:09:25.667594    8516 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:09:25.667749    8516 sshutil.go:53] new ssh client: &{IP:172.17.91.18 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\kubernetes-upgrade-003200\id_rsa Username:docker}
	I0407 15:09:25.817372    8516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0407 15:09:27.941911    8516 main.go:141] libmachine: [stdout =====>] : 172.17.91.18
	
	I0407 15:09:27.941911    8516 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:09:27.942247    8516 sshutil.go:53] new ssh client: &{IP:172.17.91.18 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\kubernetes-upgrade-003200\id_rsa Username:docker}
	I0407 15:09:28.108506    8516 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.32.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0407 15:09:28.319359    8516 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0407 15:09:28.321360    8516 addons.go:514] duration metric: took 10.5568671s for enable addons: enabled=[storage-provisioner default-storageclass]
	I0407 15:09:28.322361    8516 start.go:246] waiting for cluster config update ...
	I0407 15:09:28.322361    8516 start.go:255] writing updated cluster config ...
	I0407 15:09:28.336359    8516 ssh_runner.go:195] Run: rm -f paused
	I0407 15:09:28.511382    8516 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0407 15:09:28.521599    8516 out.go:177] * Done! kubectl is now configured to use "kubernetes-upgrade-003200" cluster and "default" namespace by default
	I0407 15:09:25.596187   13960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-422800 ).state
	I0407 15:09:27.945293   13960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:09:27.945828   13960 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:09:27.945977   13960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-422800 ).networkadapters[0]).ipaddresses[0]
	I0407 15:09:30.766475   13960 main.go:141] libmachine: [stdout =====>] : 
	I0407 15:09:30.767050   13960 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:09:31.767456   13960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-422800 ).state
	I0407 15:09:34.088935   13960 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:09:34.089392   13960 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:09:34.089471   13960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM docker-flags-422800 ).networkadapters[0]).ipaddresses[0]
	I0407 15:09:36.738963   13960 main.go:141] libmachine: [stdout =====>] : 
	I0407 15:09:36.738963   13960 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:09:37.739972   13960 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM docker-flags-422800 ).state
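
Note: the interleaved PID 13960 lines belong to a concurrently running docker-flags-422800 test; minikube's Hyper-V machine driver shells out to PowerShell for every VM operation and repeatedly polls the VM state and network adapter until the guest is reachable. Below is a minimal sketch of that shell-out-and-poll pattern using os/exec; the VM name is taken from the log and the code is illustrative, not the real driver.

// hyperv_poll_sketch.go - shell out to PowerShell to poll a Hyper-V VM's state,
// the same pattern as the libmachine [executing ==>] lines above.
package main

import (
	"fmt"
	"os/exec"
	"strings"
	"time"
)

// vmState runs the Hyper-V cmdlet non-interactively, as the log does.
func vmState(vm string) (string, error) {
	cmd := exec.Command("powershell.exe", "-NoProfile", "-NonInteractive",
		fmt.Sprintf("( Hyper-V\\Get-VM %s ).state", vm))
	out, err := cmd.Output()
	return strings.TrimSpace(string(out)), err
}

func main() {
	const vm = "docker-flags-422800" // VM name taken from the log; adjust as needed
	for i := 0; i < 20; i++ {
		state, err := vmState(vm)
		if err != nil {
			fmt.Println("powershell error:", err)
		} else {
			fmt.Println("VM state:", state)
			if state == "Running" {
				return
			}
		}
		time.Sleep(5 * time.Second)
	}
	fmt.Println("timed out waiting for the VM to start")
}
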
	
	
	==> Docker <==
	Apr 07 15:09:15 kubernetes-upgrade-003200 dockerd[5282]: time="2025-04-07T15:09:15.888075045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:09:15 kubernetes-upgrade-003200 dockerd[5282]: time="2025-04-07T15:09:15.939153358Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 15:09:15 kubernetes-upgrade-003200 dockerd[5282]: time="2025-04-07T15:09:15.939406451Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 15:09:15 kubernetes-upgrade-003200 dockerd[5282]: time="2025-04-07T15:09:15.939610546Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:09:15 kubernetes-upgrade-003200 dockerd[5282]: time="2025-04-07T15:09:15.940096434Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:09:16 kubernetes-upgrade-003200 cri-dockerd[5564]: time="2025-04-07T15:09:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/d00fe6ddd83aedcf5013f915511058837b5999afce26be19288a55e7ab29a34d/resolv.conf as [nameserver 172.17.80.1]"
	Apr 07 15:09:16 kubernetes-upgrade-003200 cri-dockerd[5564]: time="2025-04-07T15:09:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/72a3580f72fbf365605007e882277b2f5b32b275e96786f021e37111dc62a51f/resolv.conf as [nameserver 172.17.80.1]"
	Apr 07 15:09:16 kubernetes-upgrade-003200 cri-dockerd[5564]: time="2025-04-07T15:09:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7c5e8002a0f0ca42f1149b1188ad21bc498f8a6c3875a9b225d6ba178a9f9987/resolv.conf as [nameserver 172.17.80.1]"
	Apr 07 15:09:16 kubernetes-upgrade-003200 cri-dockerd[5564]: time="2025-04-07T15:09:16Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1276a458a458cfbb66575edfb09b577cd338d4ee8a0e636985a624577ba1889b/resolv.conf as [nameserver 172.17.80.1]"
	Apr 07 15:09:16 kubernetes-upgrade-003200 dockerd[5282]: time="2025-04-07T15:09:16.573124860Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 15:09:16 kubernetes-upgrade-003200 dockerd[5282]: time="2025-04-07T15:09:16.573326755Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 15:09:16 kubernetes-upgrade-003200 dockerd[5282]: time="2025-04-07T15:09:16.573351554Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:09:16 kubernetes-upgrade-003200 dockerd[5282]: time="2025-04-07T15:09:16.579609997Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:09:16 kubernetes-upgrade-003200 dockerd[5282]: time="2025-04-07T15:09:16.590623220Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 15:09:16 kubernetes-upgrade-003200 dockerd[5282]: time="2025-04-07T15:09:16.590696618Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 15:09:16 kubernetes-upgrade-003200 dockerd[5282]: time="2025-04-07T15:09:16.590723118Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:09:16 kubernetes-upgrade-003200 dockerd[5282]: time="2025-04-07T15:09:16.590852614Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:09:17 kubernetes-upgrade-003200 dockerd[5282]: time="2025-04-07T15:09:17.099320086Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 15:09:17 kubernetes-upgrade-003200 dockerd[5282]: time="2025-04-07T15:09:17.107640564Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 15:09:17 kubernetes-upgrade-003200 dockerd[5282]: time="2025-04-07T15:09:17.107846258Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:09:17 kubernetes-upgrade-003200 dockerd[5282]: time="2025-04-07T15:09:17.108597738Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:09:17 kubernetes-upgrade-003200 dockerd[5282]: time="2025-04-07T15:09:17.134589745Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 15:09:17 kubernetes-upgrade-003200 dockerd[5282]: time="2025-04-07T15:09:17.134664743Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 15:09:17 kubernetes-upgrade-003200 dockerd[5282]: time="2025-04-07T15:09:17.134690342Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:09:17 kubernetes-upgrade-003200 dockerd[5282]: time="2025-04-07T15:09:17.134813139Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	b3ba0604c3f38       c69fa2e9cbf5f       33 seconds ago      Running             coredns                   1                   1276a458a458c       coredns-668d6bf9bc-d2kfp
	96b948e362d96       c69fa2e9cbf5f       33 seconds ago      Running             coredns                   1                   7c5e8002a0f0c       coredns-668d6bf9bc-xcbrp
	c0eee07ae1898       f1332858868e1       33 seconds ago      Running             kube-proxy                2                   72a3580f72fbf       kube-proxy-7rcqc
	7bff0d17bd5bc       6e38f40d628db       33 seconds ago      Running             storage-provisioner       2                   d00fe6ddd83ae       storage-provisioner
	0f226fc19f690       85b7a174738ba       38 seconds ago      Running             kube-apiserver            2                   6a0933cf12441       kube-apiserver-kubernetes-upgrade-003200
	04113a09d8f89       d8e673e7c9983       38 seconds ago      Running             kube-scheduler            2                   f12474a483519       kube-scheduler-kubernetes-upgrade-003200
	f0d92c6a57c03       b6a454c5a800d       38 seconds ago      Running             kube-controller-manager   2                   45f3d21ce6fda       kube-controller-manager-kubernetes-upgrade-003200
	a1e098ad4429e       a9e7e6b294baf       38 seconds ago      Running             etcd                      2                   3a3bdf611de63       etcd-kubernetes-upgrade-003200
	70509242cf28e       6e38f40d628db       42 seconds ago      Created             storage-provisioner       1                   2b2c4e6e34c26       storage-provisioner
	00af04923e5a0       f1332858868e1       43 seconds ago      Created             kube-proxy                1                   4a02b90ca954e       kube-proxy-7rcqc
	f3659ba7c8ca7       d8e673e7c9983       43 seconds ago      Created             kube-scheduler            1                   320e5b85935b5       kube-scheduler-kubernetes-upgrade-003200
	e91ee3641537d       85b7a174738ba       43 seconds ago      Created             kube-apiserver            1                   fb1e0f38b8f9b       kube-apiserver-kubernetes-upgrade-003200
	9ce60b56a83ae       b6a454c5a800d       43 seconds ago      Created             kube-controller-manager   1                   9f0109a829e38       kube-controller-manager-kubernetes-upgrade-003200
	7100bd4883443       a9e7e6b294baf       43 seconds ago      Created             etcd                      1                   af413444c5642       etcd-kubernetes-upgrade-003200
	dd031145c7ebb       c69fa2e9cbf5f       6 minutes ago       Exited              coredns                   0                   4d73f97632cae       coredns-668d6bf9bc-d2kfp
	4b5ea2505466e       c69fa2e9cbf5f       6 minutes ago       Exited              coredns                   0                   53526e192c6c5       coredns-668d6bf9bc-xcbrp
	
	
	==> coredns [4b5ea2505466] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [96b948e362d9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [b3ba0604c3f3] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	
	
	==> coredns [dd031145c7eb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-003200
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-003200
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
	                    minikube.k8s.io/name=kubernetes-upgrade-003200
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_07T15_03_03_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 15:02:59 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-003200
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 15:09:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 15:09:14 +0000   Mon, 07 Apr 2025 15:02:57 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 15:09:14 +0000   Mon, 07 Apr 2025 15:02:57 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 15:09:14 +0000   Mon, 07 Apr 2025 15:02:57 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 15:09:14 +0000   Mon, 07 Apr 2025 15:02:59 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.91.18
	  Hostname:    kubernetes-upgrade-003200
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164264Ki
	  pods:               110
	System Info:
	  Machine ID:                 862842c80d464592af39f02c3c732fec
	  System UUID:                0c296c8c-2a49-a44b-bc03-583f5b58c771
	  Boot ID:                    a6010427-3897-4eea-b1d4-974a97af9458
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-d2kfp                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m41s
	  kube-system                 coredns-668d6bf9bc-xcbrp                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m41s
	  kube-system                 etcd-kubernetes-upgrade-003200                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m46s
	  kube-system                 kube-apiserver-kubernetes-upgrade-003200             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m48s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-003200    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m48s
	  kube-system                 kube-proxy-7rcqc                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m42s
	  kube-system                 kube-scheduler-kubernetes-upgrade-003200             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m46s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m36s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 6m39s              kube-proxy       
	  Normal  Starting                 32s                kube-proxy       
	  Normal  NodeHasSufficientPID     6m46s              kubelet          Node kubernetes-upgrade-003200 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m46s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m46s              kubelet          Node kubernetes-upgrade-003200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m46s              kubelet          Node kubernetes-upgrade-003200 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m46s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           6m42s              node-controller  Node kubernetes-upgrade-003200 event: Registered Node kubernetes-upgrade-003200 in Controller
	  Normal  Starting                 39s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  39s (x8 over 39s)  kubelet          Node kubernetes-upgrade-003200 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    39s (x8 over 39s)  kubelet          Node kubernetes-upgrade-003200 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     39s (x7 over 39s)  kubelet          Node kubernetes-upgrade-003200 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  39s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           31s                node-controller  Node kubernetes-upgrade-003200 event: Registered Node kubernetes-upgrade-003200 in Controller
	
	
	==> dmesg <==
	[  +4.927071] systemd-fstab-generator[1923]: Ignoring "noauto" option for root device
	[  +0.112753] kauditd_printk_skb: 52 callbacks suppressed
	[  +9.331675] systemd-fstab-generator[2124]: Ignoring "noauto" option for root device
	[  +0.112452] kauditd_printk_skb: 12 callbacks suppressed
	[Apr 7 15:03] systemd-fstab-generator[2590]: Ignoring "noauto" option for root device
	[  +0.143899] kauditd_printk_skb: 62 callbacks suppressed
	[  +1.686008] systemd-fstab-generator[2667]: Ignoring "noauto" option for root device
	[  +3.831949] kauditd_printk_skb: 34 callbacks suppressed
	[  +5.452741] kauditd_printk_skb: 66 callbacks suppressed
	[Apr 7 15:08] systemd-fstab-generator[4747]: Ignoring "noauto" option for root device
	[  +0.169643] kauditd_printk_skb: 2 callbacks suppressed
	[  +0.496966] systemd-fstab-generator[4783]: Ignoring "noauto" option for root device
	[  +0.279073] systemd-fstab-generator[4795]: Ignoring "noauto" option for root device
	[  +0.302436] systemd-fstab-generator[4809]: Ignoring "noauto" option for root device
	[  +5.405212] kauditd_printk_skb: 89 callbacks suppressed
	[Apr 7 15:09] systemd-fstab-generator[5508]: Ignoring "noauto" option for root device
	[  +0.234674] systemd-fstab-generator[5520]: Ignoring "noauto" option for root device
	[  +0.216900] systemd-fstab-generator[5532]: Ignoring "noauto" option for root device
	[  +0.314154] systemd-fstab-generator[5552]: Ignoring "noauto" option for root device
	[  +0.940000] systemd-fstab-generator[5725]: Ignoring "noauto" option for root device
	[  +0.398825] kauditd_printk_skb: 141 callbacks suppressed
	[  +4.598585] systemd-fstab-generator[6685]: Ignoring "noauto" option for root device
	[  +1.231331] kauditd_printk_skb: 64 callbacks suppressed
	[  +5.025218] kauditd_printk_skb: 35 callbacks suppressed
	[  +1.917381] systemd-fstab-generator[7997]: Ignoring "noauto" option for root device
	
	
	==> etcd [7100bd488344] <==
	
	
	==> etcd [a1e098ad4429] <==
	{"level":"info","ts":"2025-04-07T15:09:11.782523Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"96545605d23a4ec9 switched to configuration voters=(10832377586734747337)"}
	{"level":"info","ts":"2025-04-07T15:09:11.782629Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7d8a6a589f809cde","local-member-id":"96545605d23a4ec9","added-peer-id":"96545605d23a4ec9","added-peer-peer-urls":["https://172.17.91.18:2380"]}
	{"level":"info","ts":"2025-04-07T15:09:11.782722Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7d8a6a589f809cde","local-member-id":"96545605d23a4ec9","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T15:09:11.782760Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T15:09:11.787235Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-07T15:09:11.787554Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"96545605d23a4ec9","initial-advertise-peer-urls":["https://172.17.91.18:2380"],"listen-peer-urls":["https://172.17.91.18:2380"],"advertise-client-urls":["https://172.17.91.18:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.91.18:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-07T15:09:11.787585Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-07T15:09:11.787657Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"172.17.91.18:2380"}
	{"level":"info","ts":"2025-04-07T15:09:11.787666Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"172.17.91.18:2380"}
	{"level":"info","ts":"2025-04-07T15:09:13.126719Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"96545605d23a4ec9 is starting a new election at term 2"}
	{"level":"info","ts":"2025-04-07T15:09:13.127992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"96545605d23a4ec9 became pre-candidate at term 2"}
	{"level":"info","ts":"2025-04-07T15:09:13.128819Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"96545605d23a4ec9 received MsgPreVoteResp from 96545605d23a4ec9 at term 2"}
	{"level":"info","ts":"2025-04-07T15:09:13.130120Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"96545605d23a4ec9 became candidate at term 3"}
	{"level":"info","ts":"2025-04-07T15:09:13.130334Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"96545605d23a4ec9 received MsgVoteResp from 96545605d23a4ec9 at term 3"}
	{"level":"info","ts":"2025-04-07T15:09:13.131315Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"96545605d23a4ec9 became leader at term 3"}
	{"level":"info","ts":"2025-04-07T15:09:13.131341Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 96545605d23a4ec9 elected leader 96545605d23a4ec9 at term 3"}
	{"level":"info","ts":"2025-04-07T15:09:13.140460Z","caller":"etcdserver/server.go:2140","msg":"published local member to cluster through raft","local-member-id":"96545605d23a4ec9","local-member-attributes":"{Name:kubernetes-upgrade-003200 ClientURLs:[https://172.17.91.18:2379]}","request-path":"/0/members/96545605d23a4ec9/attributes","cluster-id":"7d8a6a589f809cde","publish-timeout":"7s"}
	{"level":"info","ts":"2025-04-07T15:09:13.140682Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T15:09:13.140704Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2025-04-07T15:09:13.144121Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T15:09:13.145872Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2025-04-07T15:09:13.147483Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2025-04-07T15:09:13.147794Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"172.17.91.18:2379"}
	{"level":"info","ts":"2025-04-07T15:09:13.148582Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T15:09:13.159285Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 15:09:50 up 9 min,  0 users,  load average: 0.85, 0.54, 0.28
	Linux kubernetes-upgrade-003200 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [0f226fc19f69] <==
	I0407 15:09:14.826095       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0407 15:09:14.826770       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0407 15:09:14.833160       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0407 15:09:14.834114       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0407 15:09:14.834411       1 shared_informer.go:320] Caches are synced for configmaps
	I0407 15:09:14.835020       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0407 15:09:14.835107       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0407 15:09:14.835121       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0407 15:09:14.835207       1 aggregator.go:171] initial CRD sync complete...
	I0407 15:09:14.835713       1 autoregister_controller.go:144] Starting autoregister controller
	I0407 15:09:14.835747       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0407 15:09:14.836309       1 cache.go:39] Caches are synced for autoregister controller
	E0407 15:09:14.845227       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0407 15:09:14.857528       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0407 15:09:14.863206       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0407 15:09:14.863305       1 policy_source.go:240] refreshing policies
	I0407 15:09:14.883861       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0407 15:09:15.170272       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0407 15:09:15.639828       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0407 15:09:17.372717       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0407 15:09:17.521389       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0407 15:09:17.703166       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0407 15:09:17.731612       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0407 15:09:18.086265       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0407 15:09:18.314338       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-apiserver [e91ee3641537] <==
	
	
	==> kube-controller-manager [9ce60b56a83a] <==
	
	
	==> kube-controller-manager [f0d92c6a57c0] <==
	I0407 15:09:18.095295       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0407 15:09:18.095742       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0407 15:09:18.072199       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-client
	I0407 15:09:18.126932       1 shared_informer.go:320] Caches are synced for persistent volume
	I0407 15:09:18.127828       1 shared_informer.go:320] Caches are synced for TTL
	I0407 15:09:18.127866       1 shared_informer.go:320] Caches are synced for PVC protection
	I0407 15:09:18.072205       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kube-apiserver-client
	I0407 15:09:18.073047       1 shared_informer.go:320] Caches are synced for garbage collector
	I0407 15:09:18.128118       1 garbagecollector.go:154] "Garbage collector: all resource monitors have synced" logger="garbage-collector-controller"
	I0407 15:09:18.128128       1 garbagecollector.go:157] "Proceeding to collect garbage" logger="garbage-collector-controller"
	I0407 15:09:18.080356       1 shared_informer.go:320] Caches are synced for node
	I0407 15:09:18.128392       1 range_allocator.go:177] "Sending events to api server" logger="node-ipam-controller"
	I0407 15:09:18.131854       1 range_allocator.go:183] "Starting range CIDR allocator" logger="node-ipam-controller"
	I0407 15:09:18.134408       1 shared_informer.go:313] Waiting for caches to sync for cidrallocator
	I0407 15:09:18.134621       1 shared_informer.go:320] Caches are synced for cidrallocator
	I0407 15:09:18.135021       1 shared_informer.go:320] Caches are synced for deployment
	I0407 15:09:18.139781       1 shared_informer.go:320] Caches are synced for ReplicationController
	I0407 15:09:18.072190       1 shared_informer.go:320] Caches are synced for certificate-csrsigning-kubelet-serving
	I0407 15:09:18.097295       1 shared_informer.go:320] Caches are synced for certificate-csrapproving
	I0407 15:09:18.112200       1 shared_informer.go:320] Caches are synced for attach detach
	I0407 15:09:18.112223       1 shared_informer.go:320] Caches are synced for disruption
	I0407 15:09:18.112237       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0407 15:09:18.191276       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="157.195µs"
	I0407 15:09:18.194298       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="kubernetes-upgrade-003200"
	I0407 15:09:18.096408       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	
	
	==> kube-proxy [00af04923e5a] <==
	
	
	==> kube-proxy [c0eee07ae189] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0407 15:09:17.276513       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0407 15:09:17.317378       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.17.91.18"]
	E0407 15:09:17.318540       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0407 15:09:17.430721       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0407 15:09:17.430917       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0407 15:09:17.430949       1 server_linux.go:170] "Using iptables Proxier"
	I0407 15:09:17.437239       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0407 15:09:17.437549       1 server.go:497] "Version info" version="v1.32.2"
	I0407 15:09:17.437565       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 15:09:17.440615       1 config.go:199] "Starting service config controller"
	I0407 15:09:17.440688       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0407 15:09:17.440715       1 config.go:105] "Starting endpoint slice config controller"
	I0407 15:09:17.440723       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0407 15:09:17.441701       1 config.go:329] "Starting node config controller"
	I0407 15:09:17.442002       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0407 15:09:17.541847       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0407 15:09:17.541951       1 shared_informer.go:320] Caches are synced for service config
	I0407 15:09:17.542804       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [04113a09d8f8] <==
	I0407 15:09:12.881967       1 serving.go:386] Generated self-signed cert in-memory
	W0407 15:09:14.718225       1 requestheader_controller.go:204] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0407 15:09:14.718297       1 authentication.go:397] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0407 15:09:14.718310       1 authentication.go:398] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0407 15:09:14.718319       1 authentication.go:399] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0407 15:09:14.802412       1 server.go:166] "Starting Kubernetes Scheduler" version="v1.32.2"
	I0407 15:09:14.802486       1 server.go:168] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 15:09:14.807516       1 configmap_cafile_content.go:205] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0407 15:09:14.807809       1 tlsconfig.go:243] "Starting DynamicServingCertificateController"
	I0407 15:09:14.807783       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0407 15:09:14.808154       1 shared_informer.go:313] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0407 15:09:14.910110       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [f3659ba7c8ca] <==
	
	
	==> kubelet <==
	Apr 07 15:09:14 kubernetes-upgrade-003200 kubelet[6692]: I0407 15:09:14.914685    6692 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 07 15:09:14 kubernetes-upgrade-003200 kubelet[6692]: E0407 15:09:14.941211    6692 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-kubernetes-upgrade-003200\" already exists" pod="kube-system/kube-scheduler-kubernetes-upgrade-003200"
	Apr 07 15:09:14 kubernetes-upgrade-003200 kubelet[6692]: I0407 15:09:14.941322    6692 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-kubernetes-upgrade-003200"
	Apr 07 15:09:14 kubernetes-upgrade-003200 kubelet[6692]: E0407 15:09:14.955149    6692 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-kubernetes-upgrade-003200\" already exists" pod="kube-system/etcd-kubernetes-upgrade-003200"
	Apr 07 15:09:14 kubernetes-upgrade-003200 kubelet[6692]: I0407 15:09:14.955315    6692 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-kubernetes-upgrade-003200"
	Apr 07 15:09:14 kubernetes-upgrade-003200 kubelet[6692]: E0407 15:09:14.967778    6692 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-kubernetes-upgrade-003200\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-003200"
	Apr 07 15:09:14 kubernetes-upgrade-003200 kubelet[6692]: I0407 15:09:14.967820    6692 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-kubernetes-upgrade-003200"
	Apr 07 15:09:14 kubernetes-upgrade-003200 kubelet[6692]: E0407 15:09:14.980695    6692 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-kubernetes-upgrade-003200\" already exists" pod="kube-system/kube-controller-manager-kubernetes-upgrade-003200"
	Apr 07 15:09:15 kubernetes-upgrade-003200 kubelet[6692]: I0407 15:09:15.043292    6692 apiserver.go:52] "Watching apiserver"
	Apr 07 15:09:15 kubernetes-upgrade-003200 kubelet[6692]: I0407 15:09:15.067154    6692 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 07 15:09:15 kubernetes-upgrade-003200 kubelet[6692]: I0407 15:09:15.138794    6692 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-kubernetes-upgrade-003200"
	Apr 07 15:09:15 kubernetes-upgrade-003200 kubelet[6692]: I0407 15:09:15.139280    6692 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-kubernetes-upgrade-003200"
	Apr 07 15:09:15 kubernetes-upgrade-003200 kubelet[6692]: I0407 15:09:15.139715    6692 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-kubernetes-upgrade-003200"
	Apr 07 15:09:15 kubernetes-upgrade-003200 kubelet[6692]: I0407 15:09:15.140026    6692 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-kubernetes-upgrade-003200"
	Apr 07 15:09:15 kubernetes-upgrade-003200 kubelet[6692]: I0407 15:09:15.156767    6692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8eff4c65-a885-401c-9503-e693fde72999-xtables-lock\") pod \"kube-proxy-7rcqc\" (UID: \"8eff4c65-a885-401c-9503-e693fde72999\") " pod="kube-system/kube-proxy-7rcqc"
	Apr 07 15:09:15 kubernetes-upgrade-003200 kubelet[6692]: I0407 15:09:15.157111    6692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/ade833e1-ab68-472c-88b1-8258864314c0-tmp\") pod \"storage-provisioner\" (UID: \"ade833e1-ab68-472c-88b1-8258864314c0\") " pod="kube-system/storage-provisioner"
	Apr 07 15:09:15 kubernetes-upgrade-003200 kubelet[6692]: I0407 15:09:15.157209    6692 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8eff4c65-a885-401c-9503-e693fde72999-lib-modules\") pod \"kube-proxy-7rcqc\" (UID: \"8eff4c65-a885-401c-9503-e693fde72999\") " pod="kube-system/kube-proxy-7rcqc"
	Apr 07 15:09:15 kubernetes-upgrade-003200 kubelet[6692]: E0407 15:09:15.171363    6692 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-kubernetes-upgrade-003200\" already exists" pod="kube-system/kube-controller-manager-kubernetes-upgrade-003200"
	Apr 07 15:09:15 kubernetes-upgrade-003200 kubelet[6692]: E0407 15:09:15.179952    6692 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-kubernetes-upgrade-003200\" already exists" pod="kube-system/etcd-kubernetes-upgrade-003200"
	Apr 07 15:09:15 kubernetes-upgrade-003200 kubelet[6692]: E0407 15:09:15.180823    6692 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-kubernetes-upgrade-003200\" already exists" pod="kube-system/kube-apiserver-kubernetes-upgrade-003200"
	Apr 07 15:09:15 kubernetes-upgrade-003200 kubelet[6692]: E0407 15:09:15.181130    6692 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-kubernetes-upgrade-003200\" already exists" pod="kube-system/kube-scheduler-kubernetes-upgrade-003200"
	Apr 07 15:09:16 kubernetes-upgrade-003200 kubelet[6692]: I0407 15:09:16.228544    6692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="72a3580f72fbf365605007e882277b2f5b32b275e96786f021e37111dc62a51f"
	Apr 07 15:09:16 kubernetes-upgrade-003200 kubelet[6692]: I0407 15:09:16.246522    6692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d00fe6ddd83aedcf5013f915511058837b5999afce26be19288a55e7ab29a34d"
	Apr 07 15:09:16 kubernetes-upgrade-003200 kubelet[6692]: I0407 15:09:16.531714    6692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1276a458a458cfbb66575edfb09b577cd338d4ee8a0e636985a624577ba1889b"
	Apr 07 15:09:16 kubernetes-upgrade-003200 kubelet[6692]: I0407 15:09:16.553684    6692 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7c5e8002a0f0ca42f1149b1188ad21bc498f8a6c3875a9b225d6ba178a9f9987"
	
	
	==> storage-provisioner [70509242cf28] <==
	
	
	==> storage-provisioner [7bff0d17bd5b] <==
	I0407 15:09:16.809224       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0407 15:09:16.877255       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0407 15:09:16.877323       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0407 15:09:34.311801       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0407 15:09:34.312343       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f7e6529f-1e16-43f2-87bf-762a7305a13e", APIVersion:"v1", ResourceVersion:"741", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-003200_d0baa1f2-9354-4812-9035-8d8085eeba87 became leader
	I0407 15:09:34.312721       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-003200_d0baa1f2-9354-4812-9035-8d8085eeba87!
	I0407 15:09:34.413003       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-003200_d0baa1f2-9354-4812-9035-8d8085eeba87!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-003200 -n kubernetes-upgrade-003200
helpers_test.go:254: (dbg) Done: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p kubernetes-upgrade-003200 -n kubernetes-upgrade-003200: (12.4229713s)
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-003200 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-003200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-003200
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p kubernetes-upgrade-003200: (40.7855033s)
--- FAIL: TestKubernetesUpgrade (1297.29s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (303s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-817400 --driver=hyperv
no_kubernetes_test.go:95: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-817400 --driver=hyperv: exit status 1 (4m59.6815108s)

                                                
                                                
-- stdout --
	* [NoKubernetes-817400] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	* Using the hyperv driver based on user configuration
	* Starting "NoKubernetes-817400" primary control-plane node in "NoKubernetes-817400" cluster
	* Creating hyperv VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...

                                                
                                                
-- /stdout --
no_kubernetes_test.go:97: failed to start minikube with args: "out/minikube-windows-amd64.exe start -p NoKubernetes-817400 --driver=hyperv" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-817400 -n NoKubernetes-817400
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p NoKubernetes-817400 -n NoKubernetes-817400: exit status 7 (3.3126198s)

                                                
                                                
-- stdout --
	Nonexistent

                                                
                                                
-- /stdout --
** stderr ** 
	E0407 14:49:24.173289    2496 main.go:137] libmachine: [stderr =====>] : Hyper-V\Get-VM : Hyper-V was unable to find a virtual machine with name "NoKubernetes-817400".
	At line:1 char:3
	+ ( Hyper-V\Get-VM NoKubernetes-817400 ).state
	+   ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
	    + CategoryInfo          : InvalidArgument: (NoKubernetes-817400:String) [Get-VM], VirtualizationException
	    + FullyQualifiedErrorId : InvalidParameter,Microsoft.HyperV.PowerShell.Commands.GetVM
	 
	

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 7 (may be ok)
helpers_test.go:241: "NoKubernetes-817400" host is not running, skipping log retrieval (state="Nonexistent")
--- FAIL: TestNoKubernetes/serial/StartWithK8s (303.00s)

                                                
                                    
TestPause/serial/Unpause (80.89s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p pause-061700 --alsologtostderr -v=5
pause_test.go:121: (dbg) Non-zero exit: out/minikube-windows-amd64.exe unpause -p pause-061700 --alsologtostderr -v=5: exit status 1 (8.3866653s)

                                                
                                                
-- stdout --
	* Unpausing node pause-061700 ... 

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 15:06:17.522378    4720 out.go:345] Setting OutFile to fd 1508 ...
	I0407 15:06:17.619416    4720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 15:06:17.619416    4720 out.go:358] Setting ErrFile to fd 1960...
	I0407 15:06:17.619416    4720 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 15:06:17.643262    4720 mustload.go:65] Loading cluster: pause-061700
	I0407 15:06:17.644124    4720 config.go:182] Loaded profile config "pause-061700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 15:06:17.644995    4720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-061700 ).state
	I0407 15:06:20.001474    4720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:06:20.001682    4720 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:20.001682    4720 host.go:66] Checking if "pause-061700" exists ...
	I0407 15:06:20.002404    4720 out.go:352] Setting JSON to false
	I0407 15:06:20.003529    4720 unpause.go:53] namespaces: [kube-system kubernetes-dashboard storage-gluster istio-operator] keys: map[addons:[] all:false apiserver-ips:[] apiserver-name:minikubeCA apiserver-names:[] apiserver-port:8443 auto-pause-interval:1m0s auto-update-drivers:true base-image:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 binary-mirror: bootstrapper:kubeadm cache-images:true cancel-scheduled:false cert-expiration:26280h0m0s cni: container-runtime: cpus:2 cri-socket: delete-on-failure:false disable-driver-mounts:false disable-metrics:false disable-optimizations:false disk-size:20000mb dns-domain:cluster.local dns-proxy:false docker-env:[] docker-opt:[] download-only:false driver: dry-run:false embed-certs:false embedcerts:false enable-default-cni:false extra-config: extra-disks:0 feature-gates: force:false force-systemd:false gpus: ha:false host-dns-resolver:true host-only-cidr:192.168.59.1/24 host-only-nic-type:virtio hyperkit-vpnkit-sock: hyperkit-vsock-ports:[] hyperv-external-adapter: hyperv-use-external-switch:false hyperv-virtual-switch: image-mirror-country: image-repository: insecure-registry:[] install-addons:true interactive:true iso-url:[https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso https://github.com/kubernetes/minikube/releases/download/v1.35.0/minikube-v1.35.0-amd64.iso https://kubernetes.oss-cn-hangzhou.aliyuncs.com/minikube/iso/minikube-v1.35.0-amd64.iso] keep-context:false keep-context-active:false kubernetes-version: kvm-gpu:false kvm-hidden:false kvm-network:default kvm-numa-count:1 kvm-qemu-uri:qemu:///system listen-address: maxauditentries:1000 memory: mount:false mount-9p-version:9p2000.L mount-gid:docker mount-ip: mount-msize:262144 mount-options:[] mount-port:0 mount-string:C:\Users\jenkins.minikube3:/minikube-host mount-type:9p mount-uid:docker namespace:default nat-nic-type:virtio native-ssh:true network: network-plugin: nfs-share:[] nfs-shares-root:/nfsshares no-kubernetes:false no-vtx-check:false nodes:1 output:text ports:[] preload:true profile:pause-061700 purge:false qemu-firmware-path: registry-mirror:[] reminderwaitperiodinhours:24 rootless:false schedule:0s service-cluster-ip-range:10.96.0.0/12 skip-audit:false socket-vmnet-client-path: socket-vmnet-path: ssh-ip-address: ssh-key: ssh-port:22 ssh-user:root static-ip: subnet: trace: user: uuid: vm:false vm-driver: wait:[apiserver system_pods] wait-timeout:6m0s wantnonedriverwarning:true wantupdatenotification:true wantvirtualboxdriverwarning:true]
	I0407 15:06:20.003729    4720 unpause.go:65] node: {Name: IP:172.17.90.208 Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0407 15:06:20.007291    4720 out.go:177] * Unpausing node pause-061700 ... 
	I0407 15:06:20.011488    4720 host.go:66] Checking if "pause-061700" exists ...
	I0407 15:06:20.023818    4720 ssh_runner.go:195] Run: systemctl --version
	I0407 15:06:20.023818    4720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM pause-061700 ).state
	I0407 15:06:22.380675    4720 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:06:22.380737    4720 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:22.380737    4720 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM pause-061700 ).networkadapters[0]).ipaddresses[0]
	I0407 15:06:25.117937    4720 main.go:141] libmachine: [stdout =====>] : 172.17.90.208
	
	I0407 15:06:25.118307    4720 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:25.118729    4720 sshutil.go:53] new ssh client: &{IP:172.17.90.208 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\pause-061700\id_rsa Username:docker}
	I0407 15:06:25.212990    4720 ssh_runner.go:235] Completed: systemctl --version: (5.1891291s)
	I0407 15:06:25.225892    4720 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system|kubernetes-dashboard|storage-gluster|istio-operator)_ --format={{.ID}}
	I0407 15:06:25.257700    4720 docker.go:517] Unpausing containers: [a3be8600406a 68af283bf179 b2da6ba15cff b0dc1b9871e4 4d467950a871 d8e8e20f35ef 4a599a9e7455 e6f7e3841233 95eb95f90e9f 1789d114861e 4db0715c48d4 41027da2fae1]
	I0407 15:06:25.268352    4720 ssh_runner.go:195] Run: docker unpause a3be8600406a 68af283bf179 b2da6ba15cff b0dc1b9871e4 4d467950a871 d8e8e20f35ef 4a599a9e7455 e6f7e3841233 95eb95f90e9f 1789d114861e 4db0715c48d4 41027da2fae1
	I0407 15:06:25.505627    4720 ssh_runner.go:195] Run: sudo systemctl daemon-reload

                                                
                                                
** /stderr **
pause_test.go:123: failed to unpause minikube with args: "out/minikube-windows-amd64.exe unpause -p pause-061700 --alsologtostderr -v=5" : exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-061700 -n pause-061700
E0407 15:06:38.888304    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-061700 -n pause-061700: exit status 2 (13.2288458s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/Unpause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/Unpause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-061700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-061700 logs -n 25: (9.1803129s)
helpers_test.go:252: TestPause/serial/Unpause logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p cilium-004500 sudo                  | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC |                     |
	|         | systemctl status containerd            |                           |                   |         |                     |                     |
	|         | --all --full --no-pager                |                           |                   |         |                     |                     |
	| ssh     | -p cilium-004500 sudo                  | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC |                     |
	|         | systemctl cat containerd               |                           |                   |         |                     |                     |
	|         | --no-pager                             |                           |                   |         |                     |                     |
	| ssh     | -p cilium-004500 sudo cat              | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |                   |         |                     |                     |
	| ssh     | -p cilium-004500 sudo cat              | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |                   |         |                     |                     |
	| ssh     | -p cilium-004500 sudo                  | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC |                     |
	|         | containerd config dump                 |                           |                   |         |                     |                     |
	| ssh     | -p cilium-004500 sudo                  | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC |                     |
	|         | systemctl status crio --all            |                           |                   |         |                     |                     |
	|         | --full --no-pager                      |                           |                   |         |                     |                     |
	| ssh     | -p cilium-004500 sudo                  | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |                   |         |                     |                     |
	| ssh     | -p cilium-004500 sudo find             | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |                   |         |                     |                     |
	| ssh     | -p cilium-004500 sudo crio             | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC |                     |
	|         | config                                 |                           |                   |         |                     |                     |
	| delete  | -p cilium-004500                       | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC | 07 Apr 25 14:51 UTC |
	| start   | -p pause-061700 --memory=2048          | pause-061700              | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC | 07 Apr 25 15:00 UTC |
	|         | --install-addons=false                 |                           |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv             |                           |                   |         |                     |                     |
	| start   | -p running-upgrade-817400              | running-upgrade-817400    | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:52 UTC | 07 Apr 25 15:01 UTC |
	|         | --memory=2200                          |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| stop    | -p kubernetes-upgrade-003200           | kubernetes-upgrade-003200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:55 UTC | 07 Apr 25 14:56 UTC |
	| start   | -p kubernetes-upgrade-003200           | kubernetes-upgrade-003200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:56 UTC | 07 Apr 25 15:03 UTC |
	|         | --memory=2200                          |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.32.2           |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| stop    | stopped-upgrade-523500 stop            | minikube                  | minikube3\jenkins | v1.26.0 | 07 Apr 25 14:57 GMT | 07 Apr 25 14:58 GMT |
	| start   | -p stopped-upgrade-523500              | stopped-upgrade-523500    | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:58 UTC | 07 Apr 25 15:04 UTC |
	|         | --memory=2200                          |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| start   | -p pause-061700                        | pause-061700              | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:00 UTC | 07 Apr 25 15:05 UTC |
	|         | --alsologtostderr -v=1                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| delete  | -p running-upgrade-817400              | running-upgrade-817400    | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:01 UTC | 07 Apr 25 15:02 UTC |
	| start   | -p cert-expiration-287100              | cert-expiration-287100    | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:02 UTC |                     |
	|         | --memory=2048                          |                           |                   |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-003200           | kubernetes-upgrade-003200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:03 UTC |                     |
	|         | --memory=2200                          |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-003200           | kubernetes-upgrade-003200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:03 UTC |                     |
	|         | --memory=2200                          |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.32.2           |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| delete  | -p stopped-upgrade-523500              | stopped-upgrade-523500    | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:05 UTC | 07 Apr 25 15:05 UTC |
	| start   | -p docker-flags-422800                 | docker-flags-422800       | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:05 UTC |                     |
	|         | --cache-images=false                   |                           |                   |         |                     |                     |
	|         | --memory=2048                          |                           |                   |         |                     |                     |
	|         | --install-addons=false                 |                           |                   |         |                     |                     |
	|         | --wait=false                           |                           |                   |         |                     |                     |
	|         | --docker-env=FOO=BAR                   |                           |                   |         |                     |                     |
	|         | --docker-env=BAZ=BAT                   |                           |                   |         |                     |                     |
	|         | --docker-opt=debug                     |                           |                   |         |                     |                     |
	|         | --docker-opt=icc=true                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| pause   | -p pause-061700                        | pause-061700              | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:05 UTC | 07 Apr 25 15:06 UTC |
	|         | --alsologtostderr -v=5                 |                           |                   |         |                     |                     |
	| unpause | -p pause-061700                        | pause-061700              | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:06 UTC |                     |
	|         | --alsologtostderr -v=5                 |                           |                   |         |                     |                     |
	|---------|----------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 15:05:54
	Running on machine: minikube3
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 15:05:54.623426   13960 out.go:345] Setting OutFile to fd 836 ...
	I0407 15:05:54.698757   13960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 15:05:54.698757   13960 out.go:358] Setting ErrFile to fd 1668...
	I0407 15:05:54.698757   13960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 15:05:54.718750   13960 out.go:352] Setting JSON to false
	I0407 15:05:54.723461   13960 start.go:129] hostinfo: {"hostname":"minikube3","uptime":10147,"bootTime":1744028207,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0407 15:05:54.723602   13960 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0407 15:05:54.728886   13960 out.go:177] * [docker-flags-422800] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0407 15:05:54.733541   13960 notify.go:220] Checking for updates...
	I0407 15:05:54.739943   13960 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 15:05:54.742832   13960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 15:05:54.745641   13960 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0407 15:05:54.750633   13960 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 15:05:54.755312   13960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 15:05:51.635526    8528 addons.go:514] duration metric: took 102.8314ms for enable addons: enabled=[]
	I0407 15:05:51.643228    8528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 15:05:51.931516    8528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 15:05:51.967668    8528 node_ready.go:35] waiting up to 6m0s for node "pause-061700" to be "Ready" ...
	I0407 15:05:51.972638    8528 node_ready.go:49] node "pause-061700" has status "Ready":"True"
	I0407 15:05:51.972638    8528 node_ready.go:38] duration metric: took 4.9692ms for node "pause-061700" to be "Ready" ...
	I0407 15:05:51.972638    8528 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 15:05:51.980343    8528 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-w69np" in "kube-system" namespace to be "Ready" ...
	I0407 15:05:51.989549    8528 pod_ready.go:93] pod "coredns-668d6bf9bc-w69np" in "kube-system" namespace has status "Ready":"True"
	I0407 15:05:51.989549    8528 pod_ready.go:82] duration metric: took 8.4555ms for pod "coredns-668d6bf9bc-w69np" in "kube-system" namespace to be "Ready" ...
	I0407 15:05:51.989549    8528 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-061700" in "kube-system" namespace to be "Ready" ...
	I0407 15:05:52.343688    8528 pod_ready.go:93] pod "etcd-pause-061700" in "kube-system" namespace has status "Ready":"True"
	I0407 15:05:52.343688    8528 pod_ready.go:82] duration metric: took 354.1361ms for pod "etcd-pause-061700" in "kube-system" namespace to be "Ready" ...
	I0407 15:05:52.343688    8528 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-061700" in "kube-system" namespace to be "Ready" ...
	I0407 15:05:52.744386    8528 pod_ready.go:93] pod "kube-apiserver-pause-061700" in "kube-system" namespace has status "Ready":"True"
	I0407 15:05:52.744386    8528 pod_ready.go:82] duration metric: took 400.6948ms for pod "kube-apiserver-pause-061700" in "kube-system" namespace to be "Ready" ...
	I0407 15:05:52.744386    8528 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-061700" in "kube-system" namespace to be "Ready" ...
	I0407 15:05:53.145985    8528 pod_ready.go:93] pod "kube-controller-manager-pause-061700" in "kube-system" namespace has status "Ready":"True"
	I0407 15:05:53.145985    8528 pod_ready.go:82] duration metric: took 401.5954ms for pod "kube-controller-manager-pause-061700" in "kube-system" namespace to be "Ready" ...
	I0407 15:05:53.145985    8528 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7w9vv" in "kube-system" namespace to be "Ready" ...
	I0407 15:05:53.542895    8528 pod_ready.go:93] pod "kube-proxy-7w9vv" in "kube-system" namespace has status "Ready":"True"
	I0407 15:05:53.543029    8528 pod_ready.go:82] duration metric: took 397.0406ms for pod "kube-proxy-7w9vv" in "kube-system" namespace to be "Ready" ...
	I0407 15:05:53.543091    8528 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-061700" in "kube-system" namespace to be "Ready" ...
	I0407 15:05:53.944607    8528 pod_ready.go:93] pod "kube-scheduler-pause-061700" in "kube-system" namespace has status "Ready":"True"
	I0407 15:05:53.944607    8528 pod_ready.go:82] duration metric: took 401.5127ms for pod "kube-scheduler-pause-061700" in "kube-system" namespace to be "Ready" ...
	I0407 15:05:53.944607    8528 pod_ready.go:39] duration metric: took 1.9719534s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 15:05:53.944741    8528 api_server.go:52] waiting for apiserver process to appear ...
	I0407 15:05:53.957826    8528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 15:05:53.990302    8528 api_server.go:72] duration metric: took 2.4577972s to wait for apiserver process to appear ...
	I0407 15:05:53.990302    8528 api_server.go:88] waiting for apiserver healthz status ...
	I0407 15:05:53.990421    8528 api_server.go:253] Checking apiserver healthz at https://172.17.90.208:8443/healthz ...
	I0407 15:05:54.001961    8528 api_server.go:279] https://172.17.90.208:8443/healthz returned 200:
	ok
	I0407 15:05:54.004531    8528 api_server.go:141] control plane version: v1.32.2
	I0407 15:05:54.004531    8528 api_server.go:131] duration metric: took 14.2284ms to wait for apiserver health ...
	I0407 15:05:54.004531    8528 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 15:05:54.145827    8528 system_pods.go:59] 6 kube-system pods found
	I0407 15:05:54.145896    8528 system_pods.go:61] "coredns-668d6bf9bc-w69np" [7873c756-d86b-4882-95b3-86489046d8c2] Running
	I0407 15:05:54.145896    8528 system_pods.go:61] "etcd-pause-061700" [fe6837b6-75b3-4204-bb4d-f6ba934abae0] Running
	I0407 15:05:54.145896    8528 system_pods.go:61] "kube-apiserver-pause-061700" [c67ed7cc-8d0b-4e40-854b-64a555c57a12] Running
	I0407 15:05:54.145896    8528 system_pods.go:61] "kube-controller-manager-pause-061700" [2d47120b-7bed-4a35-b577-f502b3ebb3dc] Running
	I0407 15:05:54.145896    8528 system_pods.go:61] "kube-proxy-7w9vv" [8e6f0aec-e738-4538-bf90-80b15e69c731] Running
	I0407 15:05:54.145896    8528 system_pods.go:61] "kube-scheduler-pause-061700" [c99eee47-cc17-4da3-8ebe-c79ec253b06a] Running
	I0407 15:05:54.145896    8528 system_pods.go:74] duration metric: took 141.3641ms to wait for pod list to return data ...
	I0407 15:05:54.145996    8528 default_sa.go:34] waiting for default service account to be created ...
	I0407 15:05:54.344668    8528 default_sa.go:45] found service account: "default"
	I0407 15:05:54.344668    8528 default_sa.go:55] duration metric: took 198.6701ms for default service account to be created ...
	I0407 15:05:54.344668    8528 system_pods.go:116] waiting for k8s-apps to be running ...
	I0407 15:05:54.545598    8528 system_pods.go:86] 6 kube-system pods found
	I0407 15:05:54.545598    8528 system_pods.go:89] "coredns-668d6bf9bc-w69np" [7873c756-d86b-4882-95b3-86489046d8c2] Running
	I0407 15:05:54.545598    8528 system_pods.go:89] "etcd-pause-061700" [fe6837b6-75b3-4204-bb4d-f6ba934abae0] Running
	I0407 15:05:54.545598    8528 system_pods.go:89] "kube-apiserver-pause-061700" [c67ed7cc-8d0b-4e40-854b-64a555c57a12] Running
	I0407 15:05:54.545598    8528 system_pods.go:89] "kube-controller-manager-pause-061700" [2d47120b-7bed-4a35-b577-f502b3ebb3dc] Running
	I0407 15:05:54.545598    8528 system_pods.go:89] "kube-proxy-7w9vv" [8e6f0aec-e738-4538-bf90-80b15e69c731] Running
	I0407 15:05:54.545598    8528 system_pods.go:89] "kube-scheduler-pause-061700" [c99eee47-cc17-4da3-8ebe-c79ec253b06a] Running
	I0407 15:05:54.545598    8528 system_pods.go:126] duration metric: took 200.9284ms to wait for k8s-apps to be running ...
	I0407 15:05:54.545598    8528 system_svc.go:44] waiting for kubelet service to be running ....
	I0407 15:05:54.557585    8528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 15:05:54.585583    8528 system_svc.go:56] duration metric: took 39.9842ms WaitForService to wait for kubelet
	I0407 15:05:54.585583    8528 kubeadm.go:582] duration metric: took 3.0531117s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 15:05:54.585583    8528 node_conditions.go:102] verifying NodePressure condition ...
	I0407 15:05:54.745049    8528 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 15:05:54.745144    8528 node_conditions.go:123] node cpu capacity is 2
	I0407 15:05:54.745144    8528 node_conditions.go:105] duration metric: took 159.5598ms to run NodePressure ...
	I0407 15:05:54.745144    8528 start.go:241] waiting for startup goroutines ...
	I0407 15:05:54.745259    8528 start.go:246] waiting for cluster config update ...
	I0407 15:05:54.745365    8528 start.go:255] writing updated cluster config ...
	I0407 15:05:54.758275    8528 ssh_runner.go:195] Run: rm -f paused
	I0407 15:05:54.935508    8528 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0407 15:05:54.939934    8528 out.go:177] * Done! kubectl is now configured to use "pause-061700" cluster and "default" namespace by default
	I0407 15:05:51.949590   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	I0407 15:05:54.340720   13864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:05:54.340720   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:05:54.340813   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-287100 ).networkadapters[0]).ipaddresses[0]
	I0407 15:05:54.758275   13960 config.go:182] Loaded profile config "cert-expiration-287100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 15:05:54.759274   13960 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 15:05:54.759274   13960 config.go:182] Loaded profile config "kubernetes-upgrade-003200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 15:05:54.760273   13960 config.go:182] Loaded profile config "pause-061700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 15:05:54.760273   13960 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 15:06:01.099368   13960 out.go:177] * Using the hyperv driver based on user configuration
	I0407 15:06:01.104377   13960 start.go:297] selected driver: hyperv
	I0407 15:06:01.104377   13960 start.go:901] validating driver "hyperv" against <nil>
	I0407 15:06:01.104377   13960 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 15:06:01.164353   13960 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 15:06:01.166349   13960 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0407 15:06:01.166349   13960 cni.go:84] Creating CNI manager for ""
	I0407 15:06:01.166349   13960 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0407 15:06:01.166649   13960 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0407 15:06:01.166833   13960 start.go:340] cluster config:
	{Name:docker-flags-422800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:docker-flags-422800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 15:06:01.166833   13960 iso.go:125] acquiring lock: {Name:mk99bbb6a54210c1995fdf151b41c83b57c3735b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 15:06:01.171167   13960 out.go:177] * Starting "docker-flags-422800" primary control-plane node in "docker-flags-422800" cluster
	I0407 15:05:57.370048   13864 main.go:141] libmachine: [stdout =====>] : 
	I0407 15:05:57.371049   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:05:58.371573   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	I0407 15:06:00.944426   13864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:06:00.944426   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:00.944517   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-287100 ).networkadapters[0]).ipaddresses[0]
	I0407 15:06:01.172178   13960 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 15:06:01.172178   13960 preload.go:146] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0407 15:06:01.172178   13960 cache.go:56] Caching tarball of preloaded images
	I0407 15:06:01.172178   13960 preload.go:172] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0407 15:06:01.172178   13960 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0407 15:06:01.172178   13960 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\docker-flags-422800\config.json ...
	I0407 15:06:01.172178   13960 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\docker-flags-422800\config.json: {Name:mk59681225c11f992fb98ca0cdd731c121f621cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 15:06:01.176180   13960 start.go:360] acquireMachinesLock for docker-flags-422800: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 15:06:04.305418   13864 main.go:141] libmachine: [stdout =====>] : 
	I0407 15:06:04.305418   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:05.305949   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	I0407 15:06:07.730399   13864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:06:07.730504   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:07.730595   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-287100 ).networkadapters[0]).ipaddresses[0]
	I0407 15:06:10.470986   13864 main.go:141] libmachine: [stdout =====>] : 
	I0407 15:06:10.470986   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:11.471155   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	I0407 15:06:13.903946   13864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:06:13.903946   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:13.904433   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-287100 ).networkadapters[0]).ipaddresses[0]
	I0407 15:06:16.718088   13864 main.go:141] libmachine: [stdout =====>] : 172.17.86.101
	
	I0407 15:06:16.718088   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:16.718723   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	I0407 15:06:19.084639   13864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:06:19.084639   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:19.085342   13864 machine.go:93] provisionDockerMachine start ...
	I0407 15:06:19.085427   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	I0407 15:06:21.434317   13864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:06:21.434317   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:21.434534   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-287100 ).networkadapters[0]).ipaddresses[0]
	I0407 15:06:24.184019   13864 main.go:141] libmachine: [stdout =====>] : 172.17.86.101
	
	I0407 15:06:24.184095   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:24.192523   13864 main.go:141] libmachine: Using SSH client type: native
	I0407 15:06:24.209280   13864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.86.101 22 <nil> <nil>}
	I0407 15:06:24.209280   13864 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 15:06:24.341491   13864 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0407 15:06:24.341491   13864 buildroot.go:166] provisioning hostname "cert-expiration-287100"
	I0407 15:06:24.341491   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	I0407 15:06:26.734433   13864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:06:26.734562   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:26.734647   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-287100 ).networkadapters[0]).ipaddresses[0]
	I0407 15:06:29.511755   13864 main.go:141] libmachine: [stdout =====>] : 172.17.86.101
	
	I0407 15:06:29.512114   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:29.517749   13864 main.go:141] libmachine: Using SSH client type: native
	I0407 15:06:29.518501   13864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.86.101 22 <nil> <nil>}
	I0407 15:06:29.518501   13864 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-287100 && echo "cert-expiration-287100" | sudo tee /etc/hostname
	I0407 15:06:29.697637   13864 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-287100
	
	I0407 15:06:29.697637   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	I0407 15:06:32.062720   13864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:06:32.062720   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:32.062720   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-287100 ).networkadapters[0]).ipaddresses[0]
	I0407 15:06:34.878836   13864 main.go:141] libmachine: [stdout =====>] : 172.17.86.101
	
	I0407 15:06:34.878836   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:34.885240   13864 main.go:141] libmachine: Using SSH client type: native
	I0407 15:06:34.885874   13864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.86.101 22 <nil> <nil>}
	I0407 15:06:34.885874   13864 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-287100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-287100/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-287100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 15:06:35.044095   13864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0407 15:06:35.044095   13864 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0407 15:06:35.044095   13864 buildroot.go:174] setting up certificates
	I0407 15:06:35.044095   13864 provision.go:84] configureAuth start
	I0407 15:06:35.044095   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	
	
	==> Docker <==
	Apr 07 15:05:33 pause-061700 dockerd[4785]: time="2025-04-07T15:05:33.538317274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:05:33 pause-061700 dockerd[4785]: time="2025-04-07T15:05:33.538767785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:05:33 pause-061700 dockerd[4785]: time="2025-04-07T15:05:33.557183234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 15:05:33 pause-061700 dockerd[4785]: time="2025-04-07T15:05:33.557344338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 15:05:33 pause-061700 dockerd[4785]: time="2025-04-07T15:05:33.557440340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:05:33 pause-061700 dockerd[4785]: time="2025-04-07T15:05:33.557642045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:05:37 pause-061700 cri-dockerd[5091]: time="2025-04-07T15:05:37Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Apr 07 15:05:39 pause-061700 dockerd[4785]: time="2025-04-07T15:05:39.032341912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 15:05:39 pause-061700 dockerd[4785]: time="2025-04-07T15:05:39.032566116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 15:05:39 pause-061700 dockerd[4785]: time="2025-04-07T15:05:39.032646618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:05:39 pause-061700 dockerd[4785]: time="2025-04-07T15:05:39.032875122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:05:39 pause-061700 cri-dockerd[5091]: time="2025-04-07T15:05:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b0dc1b9871e4be7d1fea10a590624b0a3b83ff430ace58432bdf928649f49458/resolv.conf as [nameserver 172.17.80.1]"
	Apr 07 15:05:39 pause-061700 dockerd[4785]: time="2025-04-07T15:05:39.356842478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 15:05:39 pause-061700 dockerd[4785]: time="2025-04-07T15:05:39.356946780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 15:05:39 pause-061700 dockerd[4785]: time="2025-04-07T15:05:39.356969680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:05:39 pause-061700 dockerd[4785]: time="2025-04-07T15:05:39.357594591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:05:39 pause-061700 cri-dockerd[5091]: time="2025-04-07T15:05:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b2da6ba15cffb604973e41cc2e49c457176c00cead7444c66c547ff7ad4650ff/resolv.conf as [nameserver 172.17.80.1]"
	Apr 07 15:05:40 pause-061700 dockerd[4785]: time="2025-04-07T15:05:40.956221372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 15:05:40 pause-061700 dockerd[4785]: time="2025-04-07T15:05:40.958618283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 15:05:40 pause-061700 dockerd[4785]: time="2025-04-07T15:05:40.958691083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:05:40 pause-061700 dockerd[4785]: time="2025-04-07T15:05:40.958927084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:05:42 pause-061700 dockerd[4785]: time="2025-04-07T15:05:42.017642857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 15:05:42 pause-061700 dockerd[4785]: time="2025-04-07T15:05:42.017866058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 15:05:42 pause-061700 dockerd[4785]: time="2025-04-07T15:05:42.017884559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:05:42 pause-061700 dockerd[4785]: time="2025-04-07T15:05:42.018002059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a3be8600406a2       c69fa2e9cbf5f       About a minute ago   Running             coredns                   1                   b2da6ba15cffb       coredns-668d6bf9bc-w69np
	68af283bf1797       f1332858868e1       About a minute ago   Running             kube-proxy                2                   b0dc1b9871e4b       kube-proxy-7w9vv
	4d467950a8715       b6a454c5a800d       About a minute ago   Running             kube-controller-manager   2                   4db0715c48d4d       kube-controller-manager-pause-061700
	d8e8e20f35ef8       a9e7e6b294baf       About a minute ago   Running             etcd                      2                   1789d114861e4       etcd-pause-061700
	4a599a9e74558       d8e673e7c9983       About a minute ago   Running             kube-scheduler            2                   95eb95f90e9f7       kube-scheduler-pause-061700
	e6f7e38412336       85b7a174738ba       About a minute ago   Running             kube-apiserver            2                   41027da2fae1e       kube-apiserver-pause-061700
	21a40c3fbaf48       f1332858868e1       About a minute ago   Exited              kube-proxy                1                   2f5f241cfe062       kube-proxy-7w9vv
	f6cdc91b933ec       b6a454c5a800d       About a minute ago   Exited              kube-controller-manager   1                   7cd55e7b32a2a       kube-controller-manager-pause-061700
	16d911c61616e       85b7a174738ba       About a minute ago   Exited              kube-apiserver            1                   50b2b0583a852       kube-apiserver-pause-061700
	8f484a9c7afa9       d8e673e7c9983       About a minute ago   Exited              kube-scheduler            1                   748eb02a6915a       kube-scheduler-pause-061700
	30f491f7e032a       a9e7e6b294baf       About a minute ago   Exited              etcd                      1                   992fe116fc3b4       etcd-pause-061700
	e5aa9de944f78       c69fa2e9cbf5f       6 minutes ago        Exited              coredns                   0                   62239029ddd72       coredns-668d6bf9bc-w69np
	
	
	==> coredns [a3be8600406a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 52f38634f47d27a60a843ea08b564c25eb754b24bbf06ec66f8366b52e126543ce16cee7cc062958162af0c89604123ac00e3f032b67ea2f0f7eb90c30818844
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41269 - 62300 "HINFO IN 8540969459794854384.3787439977912440607. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.065369288s
	
	
	==> coredns [e5aa9de944f7] <==
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1464102347]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Apr-2025 15:00:00.730) (total time: 30009ms):
	Trace[1464102347]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30009ms (15:00:30.739)
	Trace[1464102347]: [30.009559892s] [30.009559892s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1046502233]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Apr-2025 15:00:00.739) (total time: 30000ms):
	Trace[1046502233]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (15:00:30.740)
	Trace[1046502233]: [30.000505169s] [30.000505169s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[581429437]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Apr-2025 15:00:00.739) (total time: 30001ms):
	Trace[581429437]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (15:00:30.741)
	Trace[581429437]: [30.001574377s] [30.001574377s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 52f38634f47d27a60a843ea08b564c25eb754b24bbf06ec66f8366b52e126543ce16cee7cc062958162af0c89604123ac00e3f032b67ea2f0f7eb90c30818844
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               pause-061700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-061700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
	                    minikube.k8s.io/name=pause-061700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_07T14_59_53_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 14:59:49 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-061700
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 15:05:58 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 07 Apr 2025 15:05:37 +0000   Mon, 07 Apr 2025 14:59:48 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 07 Apr 2025 15:05:37 +0000   Mon, 07 Apr 2025 14:59:48 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 07 Apr 2025 15:05:37 +0000   Mon, 07 Apr 2025 14:59:48 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 07 Apr 2025 15:05:37 +0000   Mon, 07 Apr 2025 14:59:56 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  172.17.90.208
	  Hostname:    pause-061700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015784Ki
	  pods:               110
	System Info:
	  Machine ID:                 556ee7ba75714a5580fe1ade3ed31630
	  System UUID:                4fe662fb-5d13-ba4d-903a-6f842265b852
	  Boot ID:                    303bb202-ae42-46ff-ac14-f3640dc35a22
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-w69np                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m49s
	  kube-system                 etcd-pause-061700                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         6m54s
	  kube-system                 kube-apiserver-pause-061700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m55s
	  kube-system                 kube-controller-manager-pause-061700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m54s
	  kube-system                 kube-proxy-7w9vv                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m49s
	  kube-system                 kube-scheduler-pause-061700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m54s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 65s                  kube-proxy       
	  Normal  Starting                 6m46s                kube-proxy       
	  Normal  NodeAllocatableEnforced  7m3s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     7m2s (x7 over 7m3s)  kubelet          Node pause-061700 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m2s (x8 over 7m3s)  kubelet          Node pause-061700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m2s (x8 over 7m3s)  kubelet          Node pause-061700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m54s                kubelet          Node pause-061700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m54s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  6m54s                kubelet          Node pause-061700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m54s                kubelet          Node pause-061700 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m54s                kubelet          Starting kubelet.
	  Normal  NodeReady                6m51s                kubelet          Node pause-061700 status is now: NodeReady
	  Normal  RegisteredNode           6m50s                node-controller  Node pause-061700 event: Registered Node pause-061700 in Controller
	  Normal  Starting                 75s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  75s (x8 over 75s)    kubelet          Node pause-061700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    75s (x8 over 75s)    kubelet          Node pause-061700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     75s (x7 over 75s)    kubelet          Node pause-061700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  75s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           62s                  node-controller  Node pause-061700 event: Registered Node pause-061700 in Controller
	
	
	==> dmesg <==
	[  +0.121569] kauditd_printk_skb: 74 callbacks suppressed
	[  +8.599546] systemd-fstab-generator[2278]: Ignoring "noauto" option for root device
	[  +0.173762] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.671884] systemd-fstab-generator[2400]: Ignoring "noauto" option for root device
	[  +0.193790] kauditd_printk_skb: 12 callbacks suppressed
	[Apr 7 15:00] kauditd_printk_skb: 89 callbacks suppressed
	[Apr 7 15:05] systemd-fstab-generator[4357]: Ignoring "noauto" option for root device
	[  +0.936153] systemd-fstab-generator[4393]: Ignoring "noauto" option for root device
	[  +0.302541] systemd-fstab-generator[4405]: Ignoring "noauto" option for root device
	[  +0.325424] systemd-fstab-generator[4419]: Ignoring "noauto" option for root device
	[  +5.422289] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.216000] systemd-fstab-generator[5040]: Ignoring "noauto" option for root device
	[  +0.236085] systemd-fstab-generator[5052]: Ignoring "noauto" option for root device
	[  +0.250345] systemd-fstab-generator[5065]: Ignoring "noauto" option for root device
	[  +0.362540] systemd-fstab-generator[5079]: Ignoring "noauto" option for root device
	[  +1.110603] systemd-fstab-generator[5252]: Ignoring "noauto" option for root device
	[  +2.529955] kauditd_printk_skb: 187 callbacks suppressed
	[  +3.610955] systemd-fstab-generator[6289]: Ignoring "noauto" option for root device
	[  +1.390113] kauditd_printk_skb: 62 callbacks suppressed
	[ +12.276176] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.797903] systemd-fstab-generator[6990]: Ignoring "noauto" option for root device
	[Apr 7 15:06] systemd-fstab-generator[7055]: Ignoring "noauto" option for root device
	[  +0.168308] kauditd_printk_skb: 12 callbacks suppressed
	[ +21.777193] systemd-fstab-generator[7367]: Ignoring "noauto" option for root device
	[  +0.209845] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [30f491f7e032] <==
	{"level":"info","ts":"2025-04-07T15:05:27.781830Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2025-04-07T15:05:27.814238Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"42fa8dc9b0ce1a09","local-member-id":"2560496dab777425","commit-index":592}
	{"level":"info","ts":"2025-04-07T15:05:27.814704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2560496dab777425 switched to configuration voters=()"}
	{"level":"info","ts":"2025-04-07T15:05:27.814766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2560496dab777425 became follower at term 2"}
	{"level":"info","ts":"2025-04-07T15:05:27.814804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 2560496dab777425 [peers: [], term: 2, commit: 592, applied: 0, lastindex: 592, lastterm: 2]"}
	{"level":"warn","ts":"2025-04-07T15:05:27.854531Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-04-07T15:05:27.869031Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":517}
	{"level":"info","ts":"2025-04-07T15:05:27.883598Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2025-04-07T15:05:27.893845Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"2560496dab777425","timeout":"7s"}
	{"level":"info","ts":"2025-04-07T15:05:27.894840Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"2560496dab777425"}
	{"level":"info","ts":"2025-04-07T15:05:27.894937Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"2560496dab777425","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-04-07T15:05:27.895085Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-07T15:05:27.895561Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-07T15:05:27.895634Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-07T15:05:27.895878Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-04-07T15:05:27.896650Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2560496dab777425 switched to configuration voters=(2693233312544551973)"}
	{"level":"info","ts":"2025-04-07T15:05:27.896781Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"42fa8dc9b0ce1a09","local-member-id":"2560496dab777425","added-peer-id":"2560496dab777425","added-peer-peer-urls":["https://172.17.90.208:2380"]}
	{"level":"info","ts":"2025-04-07T15:05:27.899923Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"42fa8dc9b0ce1a09","local-member-id":"2560496dab777425","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T15:05:27.900021Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T15:05:27.906319Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T15:05:27.912671Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-07T15:05:27.913022Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"2560496dab777425","initial-advertise-peer-urls":["https://172.17.90.208:2380"],"listen-peer-urls":["https://172.17.90.208:2380"],"advertise-client-urls":["https://172.17.90.208:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.90.208:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-07T15:05:27.913067Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-07T15:05:27.913315Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"172.17.90.208:2380"}
	{"level":"info","ts":"2025-04-07T15:05:27.913332Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"172.17.90.208:2380"}
	
	
	==> etcd [d8e8e20f35ef] <==
	{"level":"warn","ts":"2025-04-07T15:05:45.876797Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.284894ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" limit:1 ","response":"range_response_count:1 size:203"}
	{"level":"info","ts":"2025-04-07T15:05:45.876870Z","caller":"traceutil/trace.go:171","msg":"trace[2038297005] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpoint-controller; range_end:; response_count:1; response_revision:584; }","duration":"272.420795ms","start":"2025-04-07T15:05:45.604420Z","end":"2025-04-07T15:05:45.876840Z","steps":["trace[2038297005] 'agreement among raft nodes before linearized reading'  (duration: 272.148194ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T15:05:45.933543Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"318.545998ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner\" limit:1 ","response":"range_response_count:1 size:238"}
	{"level":"info","ts":"2025-04-07T15:05:45.933631Z","caller":"traceutil/trace.go:171","msg":"trace[1319495310] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner; range_end:; response_count:1; response_revision:586; }","duration":"318.665198ms","start":"2025-04-07T15:05:45.614950Z","end":"2025-04-07T15:05:45.933615Z","steps":["trace[1319495310] 'agreement among raft nodes before linearized reading'  (duration: 318.511097ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T15:05:45.934191Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-07T15:05:45.614937Z","time spent":"319.1648ms","remote":"127.0.0.1:39986","response type":"/etcdserverpb.KV/Range","request count":0,"request size":78,"response count":1,"response size":260,"request content":"key:\"/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner\" limit:1 "}
	{"level":"info","ts":"2025-04-07T15:05:45.934502Z","caller":"traceutil/trace.go:171","msg":"trace[1947744408] transaction","detail":"{read_only:false; response_revision:585; number_of_response:1; }","duration":"318.798398ms","start":"2025-04-07T15:05:45.615599Z","end":"2025-04-07T15:05:45.934398Z","steps":["trace[1947744408] 'process raft request'  (duration: 256.310924ms)","trace[1947744408] 'compare'  (duration: 61.296869ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-07T15:05:45.934809Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-07T15:05:45.615585Z","time spent":"319.1713ms","remote":"127.0.0.1:40052","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1252,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-4789d\" mod_revision:428 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-4789d\" value_size:1193 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-4789d\" > >"}
	{"level":"info","ts":"2025-04-07T15:05:45.934891Z","caller":"traceutil/trace.go:171","msg":"trace[786704650] transaction","detail":"{read_only:false; response_revision:586; number_of_response:1; }","duration":"314.847482ms","start":"2025-04-07T15:05:45.620032Z","end":"2025-04-07T15:05:45.934879Z","steps":["trace[786704650] 'process raft request'  (duration: 313.367675ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T15:05:45.935470Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-07T15:05:45.619964Z","time spent":"315.268883ms","remote":"127.0.0.1:40212","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4124,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:577 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4075 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"warn","ts":"2025-04-07T15:05:45.935874Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"273.5985ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-061700\" limit:1 ","response":"range_response_count:1 size:5884"}
	{"level":"info","ts":"2025-04-07T15:05:45.935932Z","caller":"traceutil/trace.go:171","msg":"trace[965279645] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-061700; range_end:; response_count:1; response_revision:586; }","duration":"273.681901ms","start":"2025-04-07T15:05:45.662238Z","end":"2025-04-07T15:05:45.935920Z","steps":["trace[965279645] 'agreement among raft nodes before linearized reading'  (duration: 273.572901ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T15:05:46.004305Z","caller":"traceutil/trace.go:171","msg":"trace[1289835501] transaction","detail":"{read_only:false; response_revision:587; number_of_response:1; }","duration":"119.557724ms","start":"2025-04-07T15:05:45.884728Z","end":"2025-04-07T15:05:46.004286Z","steps":["trace[1289835501] 'process raft request'  (duration: 102.58475ms)","trace[1289835501] 'compare'  (duration: 16.06927ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-07T15:05:46.337876Z","caller":"traceutil/trace.go:171","msg":"trace[1697178065] transaction","detail":"{read_only:false; response_revision:588; number_of_response:1; }","duration":"290.998574ms","start":"2025-04-07T15:05:46.046856Z","end":"2025-04-07T15:05:46.337855Z","steps":["trace[1697178065] 'process raft request'  (duration: 261.044343ms)","trace[1697178065] 'compare'  (duration: 29.536429ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-07T15:05:46.387919Z","caller":"traceutil/trace.go:171","msg":"trace[1760469387] linearizableReadLoop","detail":"{readStateIndex:685; appliedIndex:683; }","duration":"219.14326ms","start":"2025-04-07T15:05:46.168747Z","end":"2025-04-07T15:05:46.387890Z","steps":["trace[1760469387] 'read index received'  (duration: 139.168909ms)","trace[1760469387] 'applied index is now lower than readState.Index'  (duration: 79.972751ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-07T15:05:46.388269Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"219.440961ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-061700\" limit:1 ","response":"range_response_count:1 size:5884"}
	{"level":"info","ts":"2025-04-07T15:05:46.388324Z","caller":"traceutil/trace.go:171","msg":"trace[718842230] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-061700; range_end:; response_count:1; response_revision:589; }","duration":"219.674262ms","start":"2025-04-07T15:05:46.168634Z","end":"2025-04-07T15:05:46.388308Z","steps":["trace[718842230] 'agreement among raft nodes before linearized reading'  (duration: 219.392361ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T15:05:46.389035Z","caller":"traceutil/trace.go:171","msg":"trace[1826731664] transaction","detail":"{read_only:false; response_revision:589; number_of_response:1; }","duration":"338.104481ms","start":"2025-04-07T15:05:46.050871Z","end":"2025-04-07T15:05:46.388975Z","steps":["trace[1826731664] 'process raft request'  (duration: 336.731275ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T15:05:46.389271Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-07T15:05:46.050859Z","time spent":"338.333882ms","remote":"127.0.0.1:40212","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4124,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:586 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4075 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"info","ts":"2025-04-07T15:05:50.745197Z","caller":"traceutil/trace.go:171","msg":"trace[1492266805] transaction","detail":"{read_only:false; response_revision:592; number_of_response:1; }","duration":"201.579477ms","start":"2025-04-07T15:05:50.543546Z","end":"2025-04-07T15:05:50.745125Z","steps":["trace[1492266805] 'process raft request'  (duration: 200.918575ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T15:05:50.892734Z","caller":"traceutil/trace.go:171","msg":"trace[1947618953] transaction","detail":"{read_only:false; response_revision:593; number_of_response:1; }","duration":"129.986766ms","start":"2025-04-07T15:05:50.762729Z","end":"2025-04-07T15:05:50.892716Z","steps":["trace[1947618953] 'process raft request'  (duration: 123.714539ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T15:05:51.492283Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.685938ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8369260481374726938 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.90.208\" mod_revision:569 > success:<request_put:<key:\"/registry/masterleases/172.17.90.208\" value_size:66 lease:8369260481374726935 >> failure:<request_range:<key:\"/registry/masterleases/172.17.90.208\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-04-07T15:05:51.492404Z","caller":"traceutil/trace.go:171","msg":"trace[792650830] linearizableReadLoop","detail":"{readStateIndex:692; appliedIndex:691; }","duration":"191.937835ms","start":"2025-04-07T15:05:51.300454Z","end":"2025-04-07T15:05:51.492392Z","steps":["trace[792650830] 'read index received'  (duration: 91.026696ms)","trace[792650830] 'applied index is now lower than readState.Index'  (duration: 100.910339ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-07T15:05:51.493653Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.18684ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-061700\" limit:1 ","response":"range_response_count:1 size:6855"}
	{"level":"info","ts":"2025-04-07T15:05:51.493756Z","caller":"traceutil/trace.go:171","msg":"trace[2029776273] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-061700; range_end:; response_count:1; response_revision:594; }","duration":"193.315541ms","start":"2025-04-07T15:05:51.300428Z","end":"2025-04-07T15:05:51.493743Z","steps":["trace[2029776273] 'agreement among raft nodes before linearized reading'  (duration: 192.313636ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T15:05:51.494646Z","caller":"traceutil/trace.go:171","msg":"trace[993908416] transaction","detail":"{read_only:false; response_revision:594; number_of_response:1; }","duration":"197.226358ms","start":"2025-04-07T15:05:51.297263Z","end":"2025-04-07T15:05:51.494490Z","steps":["trace[993908416] 'process raft request'  (duration: 94.26801ms)","trace[993908416] 'compare'  (duration: 100.548337ms)"],"step_count":2}
	
	
	==> kernel <==
	 15:06:47 up 9 min,  0 users,  load average: 1.00, 0.91, 0.43
	Linux pause-061700 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [16d911c61616] <==
	W0407 15:05:28.372280       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0407 15:05:28.373120       1 options.go:238] external host was not specified, using 172.17.90.208
	I0407 15:05:28.379534       1 server.go:143] Version: v1.32.2
	I0407 15:05:28.379589       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0407 15:05:29.101964       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 15:05:29.102776       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0407 15:05:29.150462       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0407 15:05:29.151541       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0407 15:05:29.186487       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0407 15:05:29.186597       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0407 15:05:29.186970       1 instance.go:233] Using reconciler: lease
	W0407 15:05:29.190640       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e6f7e3841233] <==
	I0407 15:05:37.561467       1 apf_controller.go:382] Running API Priority and Fairness config worker
	I0407 15:05:37.561516       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0407 15:05:37.564124       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0407 15:05:37.584340       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0407 15:05:37.604279       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0407 15:05:37.604325       1 policy_source.go:240] refreshing policies
	I0407 15:05:37.627458       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0407 15:05:37.628104       1 aggregator.go:171] initial CRD sync complete...
	I0407 15:05:37.628252       1 autoregister_controller.go:144] Starting autoregister controller
	I0407 15:05:37.628269       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0407 15:05:37.628276       1 cache.go:39] Caches are synced for autoregister controller
	I0407 15:05:37.632583       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0407 15:05:37.660490       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0407 15:05:37.665956       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0407 15:05:37.668403       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0407 15:05:37.692619       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0407 15:05:37.731227       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0407 15:05:38.386025       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0407 15:05:41.647077       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.90.208]
	I0407 15:05:41.649841       1 controller.go:615] quota admission added evaluator for: endpoints
	I0407 15:05:41.713098       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0407 15:05:43.250220       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0407 15:05:43.566418       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0407 15:05:44.713873       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0407 15:05:44.798940       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-controller-manager [4d467950a871] <==
	I0407 15:05:45.240021       1 shared_informer.go:320] Caches are synced for taint-eviction-controller
	I0407 15:05:45.240500       1 shared_informer.go:320] Caches are synced for taint
	I0407 15:05:45.240809       1 node_lifecycle_controller.go:1234] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0407 15:05:45.241110       1 node_lifecycle_controller.go:886] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="pause-061700"
	I0407 15:05:45.284291       1 node_lifecycle_controller.go:1080] "Controller detected that zone is now in new state" logger="node-lifecycle-controller" zone="" newState="Normal"
	I0407 15:05:45.284180       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0407 15:05:45.284244       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0407 15:05:45.284625       1 shared_informer.go:320] Caches are synced for PV protection
	I0407 15:05:45.334979       1 shared_informer.go:320] Caches are synced for resource quota
	I0407 15:05:45.335278       1 shared_informer.go:320] Caches are synced for deployment
	I0407 15:05:45.335947       1 shared_informer.go:320] Caches are synced for disruption
	I0407 15:05:45.336228       1 shared_informer.go:320] Caches are synced for PVC protection
	I0407 15:05:45.336507       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0407 15:05:45.336976       1 shared_informer.go:320] Caches are synced for crt configmap
	I0407 15:05:45.284643       1 shared_informer.go:320] Caches are synced for ephemeral
	I0407 15:05:45.284652       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0407 15:05:45.284665       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0407 15:05:45.284674       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0407 15:05:45.341676       1 shared_informer.go:320] Caches are synced for job
	I0407 15:05:45.341850       1 shared_informer.go:320] Caches are synced for garbage collector
	I0407 15:05:45.342100       1 shared_informer.go:320] Caches are synced for attach detach
	I0407 15:05:45.342483       1 shared_informer.go:320] Caches are synced for GC
	I0407 15:05:45.342574       1 shared_informer.go:320] Caches are synced for daemon sets
	I0407 15:05:45.438247       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0407 15:05:45.438583       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="176.101µs"
	
	
	==> kube-controller-manager [f6cdc91b933e] <==
	
	
	==> kube-proxy [21a40c3fbaf4] <==
	
	
	==> kube-proxy [68af283bf179] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0407 15:05:41.321463       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0407 15:05:41.610893       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.17.90.208"]
	E0407 15:05:41.610987       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0407 15:05:41.685522       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0407 15:05:41.685605       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0407 15:05:41.685637       1 server_linux.go:170] "Using iptables Proxier"
	I0407 15:05:41.689308       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0407 15:05:41.689669       1 server.go:497] "Version info" version="v1.32.2"
	I0407 15:05:41.689708       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 15:05:41.744819       1 config.go:199] "Starting service config controller"
	I0407 15:05:41.744888       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0407 15:05:41.744947       1 config.go:105] "Starting endpoint slice config controller"
	I0407 15:05:41.744973       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0407 15:05:41.744961       1 config.go:329] "Starting node config controller"
	I0407 15:05:41.745002       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0407 15:05:41.845816       1 shared_informer.go:320] Caches are synced for service config
	I0407 15:05:41.845822       1 shared_informer.go:320] Caches are synced for node config
	I0407 15:05:41.845833       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4a599a9e7455] <==
	E0407 15:05:37.545561       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0407 15:05:37.545756       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0407 15:05:37.548584       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0407 15:05:37.548795       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 15:05:37.555410       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0407 15:05:37.555582       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 15:05:37.555691       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0407 15:05:37.555771       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 15:05:37.555997       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0407 15:05:37.556087       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 15:05:37.556308       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0407 15:05:37.556794       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0407 15:05:37.558123       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0407 15:05:37.558264       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 15:05:37.558426       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0407 15:05:37.558519       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 15:05:37.558658       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0407 15:05:37.558738       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 15:05:37.558939       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0407 15:05:37.559035       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 15:05:37.559229       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0407 15:05:37.559324       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 15:05:37.559467       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0407 15:05:37.559556       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0407 15:05:39.135452       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [8f484a9c7afa] <==
	I0407 15:05:29.261338       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Apr 07 15:05:36 pause-061700 kubelet[6296]: E0407 15:05:36.041692    6296 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-061700\" not found" node="pause-061700"
	Apr 07 15:05:36 pause-061700 kubelet[6296]: E0407 15:05:36.042399    6296 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-061700\" not found" node="pause-061700"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: E0407 15:05:37.032752    6296 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-061700\" not found" node="pause-061700"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: I0407 15:05:37.457070    6296 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-061700"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: E0407 15:05:37.631366    6296 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-061700\" already exists" pod="kube-system/etcd-pause-061700"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: I0407 15:05:37.631425    6296 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-061700"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: I0407 15:05:37.649613    6296 apiserver.go:52] "Watching apiserver"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: I0407 15:05:37.656935    6296 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: E0407 15:05:37.679743    6296 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-061700\" already exists" pod="kube-system/kube-apiserver-pause-061700"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: I0407 15:05:37.679932    6296 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-061700"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: I0407 15:05:37.718360    6296 kubelet_node_status.go:125] "Node was previously registered" node="pause-061700"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: I0407 15:05:37.718489    6296 kubelet_node_status.go:79] "Successfully registered node" node="pause-061700"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: I0407 15:05:37.718527    6296 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: I0407 15:05:37.722552    6296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e6f0aec-e738-4538-bf90-80b15e69c731-lib-modules\") pod \"kube-proxy-7w9vv\" (UID: \"8e6f0aec-e738-4538-bf90-80b15e69c731\") " pod="kube-system/kube-proxy-7w9vv"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: I0407 15:05:37.722751    6296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e6f0aec-e738-4538-bf90-80b15e69c731-xtables-lock\") pod \"kube-proxy-7w9vv\" (UID: \"8e6f0aec-e738-4538-bf90-80b15e69c731\") " pod="kube-system/kube-proxy-7w9vv"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: I0407 15:05:37.724314    6296 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: E0407 15:05:37.734696    6296 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-061700\" already exists" pod="kube-system/kube-controller-manager-pause-061700"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: I0407 15:05:37.734791    6296 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-061700"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: E0407 15:05:37.763011    6296 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-061700\" already exists" pod="kube-system/kube-scheduler-pause-061700"
	Apr 07 15:05:39 pause-061700 kubelet[6296]: I0407 15:05:39.794770    6296 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2da6ba15cffb604973e41cc2e49c457176c00cead7444c66c547ff7ad4650ff"
	Apr 07 15:05:39 pause-061700 kubelet[6296]: I0407 15:05:39.805484    6296 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0dc1b9871e4be7d1fea10a590624b0a3b83ff430ace58432bdf928649f49458"
	Apr 07 15:06:03 pause-061700 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Apr 07 15:06:03 pause-061700 systemd[1]: kubelet.service: Deactivated successfully.
	Apr 07 15:06:03 pause-061700 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 07 15:06:03 pause-061700 systemd[1]: kubelet.service: Consumed 1.743s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-061700 -n pause-061700
E0407 15:06:55.790077    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-061700 -n pause-061700: exit status 2 (13.0499157s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-061700" apiserver is not running, skipping kubectl commands (state="Paused")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-061700 -n pause-061700
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p pause-061700 -n pause-061700: exit status 2 (13.8671408s)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestPause/serial/Unpause FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestPause/serial/Unpause]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-windows-amd64.exe -p pause-061700 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-windows-amd64.exe -p pause-061700 logs -n 25: (9.5184855s)
helpers_test.go:252: TestPause/serial/Unpause logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| Command |                  Args                  |          Profile          |       User        | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	| ssh     | -p cilium-004500 sudo                  | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC |                     |
	|         | systemctl status containerd            |                           |                   |         |                     |                     |
	|         | --all --full --no-pager                |                           |                   |         |                     |                     |
	| ssh     | -p cilium-004500 sudo                  | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC |                     |
	|         | systemctl cat containerd               |                           |                   |         |                     |                     |
	|         | --no-pager                             |                           |                   |         |                     |                     |
	| ssh     | -p cilium-004500 sudo cat              | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC |                     |
	|         | /lib/systemd/system/containerd.service |                           |                   |         |                     |                     |
	| ssh     | -p cilium-004500 sudo cat              | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC |                     |
	|         | /etc/containerd/config.toml            |                           |                   |         |                     |                     |
	| ssh     | -p cilium-004500 sudo                  | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC |                     |
	|         | containerd config dump                 |                           |                   |         |                     |                     |
	| ssh     | -p cilium-004500 sudo                  | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC |                     |
	|         | systemctl status crio --all            |                           |                   |         |                     |                     |
	|         | --full --no-pager                      |                           |                   |         |                     |                     |
	| ssh     | -p cilium-004500 sudo                  | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC |                     |
	|         | systemctl cat crio --no-pager          |                           |                   |         |                     |                     |
	| ssh     | -p cilium-004500 sudo find             | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC |                     |
	|         | /etc/crio -type f -exec sh -c          |                           |                   |         |                     |                     |
	|         | 'echo {}; cat {}' \;                   |                           |                   |         |                     |                     |
	| ssh     | -p cilium-004500 sudo crio             | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC |                     |
	|         | config                                 |                           |                   |         |                     |                     |
	| delete  | -p cilium-004500                       | cilium-004500             | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC | 07 Apr 25 14:51 UTC |
	| start   | -p pause-061700 --memory=2048          | pause-061700              | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:51 UTC | 07 Apr 25 15:00 UTC |
	|         | --install-addons=false                 |                           |                   |         |                     |                     |
	|         | --wait=all --driver=hyperv             |                           |                   |         |                     |                     |
	| start   | -p running-upgrade-817400              | running-upgrade-817400    | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:52 UTC | 07 Apr 25 15:01 UTC |
	|         | --memory=2200                          |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| stop    | -p kubernetes-upgrade-003200           | kubernetes-upgrade-003200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:55 UTC | 07 Apr 25 14:56 UTC |
	| start   | -p kubernetes-upgrade-003200           | kubernetes-upgrade-003200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:56 UTC | 07 Apr 25 15:03 UTC |
	|         | --memory=2200                          |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.32.2           |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| stop    | stopped-upgrade-523500 stop            | minikube                  | minikube3\jenkins | v1.26.0 | 07 Apr 25 14:57 GMT | 07 Apr 25 14:58 GMT |
	| start   | -p stopped-upgrade-523500              | stopped-upgrade-523500    | minikube3\jenkins | v1.35.0 | 07 Apr 25 14:58 UTC | 07 Apr 25 15:04 UTC |
	|         | --memory=2200                          |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| start   | -p pause-061700                        | pause-061700              | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:00 UTC | 07 Apr 25 15:05 UTC |
	|         | --alsologtostderr -v=1                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| delete  | -p running-upgrade-817400              | running-upgrade-817400    | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:01 UTC | 07 Apr 25 15:02 UTC |
	| start   | -p cert-expiration-287100              | cert-expiration-287100    | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:02 UTC |                     |
	|         | --memory=2048                          |                           |                   |         |                     |                     |
	|         | --cert-expiration=3m                   |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-003200           | kubernetes-upgrade-003200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:03 UTC |                     |
	|         | --memory=2200                          |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0           |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| start   | -p kubernetes-upgrade-003200           | kubernetes-upgrade-003200 | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:03 UTC |                     |
	|         | --memory=2200                          |                           |                   |         |                     |                     |
	|         | --kubernetes-version=v1.32.2           |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=1                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| delete  | -p stopped-upgrade-523500              | stopped-upgrade-523500    | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:05 UTC | 07 Apr 25 15:05 UTC |
	| start   | -p docker-flags-422800                 | docker-flags-422800       | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:05 UTC |                     |
	|         | --cache-images=false                   |                           |                   |         |                     |                     |
	|         | --memory=2048                          |                           |                   |         |                     |                     |
	|         | --install-addons=false                 |                           |                   |         |                     |                     |
	|         | --wait=false                           |                           |                   |         |                     |                     |
	|         | --docker-env=FOO=BAR                   |                           |                   |         |                     |                     |
	|         | --docker-env=BAZ=BAT                   |                           |                   |         |                     |                     |
	|         | --docker-opt=debug                     |                           |                   |         |                     |                     |
	|         | --docker-opt=icc=true                  |                           |                   |         |                     |                     |
	|         | --alsologtostderr -v=5                 |                           |                   |         |                     |                     |
	|         | --driver=hyperv                        |                           |                   |         |                     |                     |
	| pause   | -p pause-061700                        | pause-061700              | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:05 UTC | 07 Apr 25 15:06 UTC |
	|         | --alsologtostderr -v=5                 |                           |                   |         |                     |                     |
	| unpause | -p pause-061700                        | pause-061700              | minikube3\jenkins | v1.35.0 | 07 Apr 25 15:06 UTC |                     |
	|         | --alsologtostderr -v=5                 |                           |                   |         |                     |                     |
	|---------|----------------------------------------|---------------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 15:05:54
	Running on machine: minikube3
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 15:05:54.623426   13960 out.go:345] Setting OutFile to fd 836 ...
	I0407 15:05:54.698757   13960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 15:05:54.698757   13960 out.go:358] Setting ErrFile to fd 1668...
	I0407 15:05:54.698757   13960 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 15:05:54.718750   13960 out.go:352] Setting JSON to false
	I0407 15:05:54.723461   13960 start.go:129] hostinfo: {"hostname":"minikube3","uptime":10147,"bootTime":1744028207,"procs":188,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0407 15:05:54.723602   13960 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0407 15:05:54.728886   13960 out.go:177] * [docker-flags-422800] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0407 15:05:54.733541   13960 notify.go:220] Checking for updates...
	I0407 15:05:54.739943   13960 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 15:05:54.742832   13960 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 15:05:54.745641   13960 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0407 15:05:54.750633   13960 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 15:05:54.755312   13960 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 15:05:51.635526    8528 addons.go:514] duration metric: took 102.8314ms for enable addons: enabled=[]
	I0407 15:05:51.643228    8528 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0407 15:05:51.931516    8528 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0407 15:05:51.967668    8528 node_ready.go:35] waiting up to 6m0s for node "pause-061700" to be "Ready" ...
	I0407 15:05:51.972638    8528 node_ready.go:49] node "pause-061700" has status "Ready":"True"
	I0407 15:05:51.972638    8528 node_ready.go:38] duration metric: took 4.9692ms for node "pause-061700" to be "Ready" ...
	I0407 15:05:51.972638    8528 pod_ready.go:36] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 15:05:51.980343    8528 pod_ready.go:79] waiting up to 6m0s for pod "coredns-668d6bf9bc-w69np" in "kube-system" namespace to be "Ready" ...
	I0407 15:05:51.989549    8528 pod_ready.go:93] pod "coredns-668d6bf9bc-w69np" in "kube-system" namespace has status "Ready":"True"
	I0407 15:05:51.989549    8528 pod_ready.go:82] duration metric: took 8.4555ms for pod "coredns-668d6bf9bc-w69np" in "kube-system" namespace to be "Ready" ...
	I0407 15:05:51.989549    8528 pod_ready.go:79] waiting up to 6m0s for pod "etcd-pause-061700" in "kube-system" namespace to be "Ready" ...
	I0407 15:05:52.343688    8528 pod_ready.go:93] pod "etcd-pause-061700" in "kube-system" namespace has status "Ready":"True"
	I0407 15:05:52.343688    8528 pod_ready.go:82] duration metric: took 354.1361ms for pod "etcd-pause-061700" in "kube-system" namespace to be "Ready" ...
	I0407 15:05:52.343688    8528 pod_ready.go:79] waiting up to 6m0s for pod "kube-apiserver-pause-061700" in "kube-system" namespace to be "Ready" ...
	I0407 15:05:52.744386    8528 pod_ready.go:93] pod "kube-apiserver-pause-061700" in "kube-system" namespace has status "Ready":"True"
	I0407 15:05:52.744386    8528 pod_ready.go:82] duration metric: took 400.6948ms for pod "kube-apiserver-pause-061700" in "kube-system" namespace to be "Ready" ...
	I0407 15:05:52.744386    8528 pod_ready.go:79] waiting up to 6m0s for pod "kube-controller-manager-pause-061700" in "kube-system" namespace to be "Ready" ...
	I0407 15:05:53.145985    8528 pod_ready.go:93] pod "kube-controller-manager-pause-061700" in "kube-system" namespace has status "Ready":"True"
	I0407 15:05:53.145985    8528 pod_ready.go:82] duration metric: took 401.5954ms for pod "kube-controller-manager-pause-061700" in "kube-system" namespace to be "Ready" ...
	I0407 15:05:53.145985    8528 pod_ready.go:79] waiting up to 6m0s for pod "kube-proxy-7w9vv" in "kube-system" namespace to be "Ready" ...
	I0407 15:05:53.542895    8528 pod_ready.go:93] pod "kube-proxy-7w9vv" in "kube-system" namespace has status "Ready":"True"
	I0407 15:05:53.543029    8528 pod_ready.go:82] duration metric: took 397.0406ms for pod "kube-proxy-7w9vv" in "kube-system" namespace to be "Ready" ...
	I0407 15:05:53.543091    8528 pod_ready.go:79] waiting up to 6m0s for pod "kube-scheduler-pause-061700" in "kube-system" namespace to be "Ready" ...
	I0407 15:05:53.944607    8528 pod_ready.go:93] pod "kube-scheduler-pause-061700" in "kube-system" namespace has status "Ready":"True"
	I0407 15:05:53.944607    8528 pod_ready.go:82] duration metric: took 401.5127ms for pod "kube-scheduler-pause-061700" in "kube-system" namespace to be "Ready" ...
	I0407 15:05:53.944607    8528 pod_ready.go:39] duration metric: took 1.9719534s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0407 15:05:53.944741    8528 api_server.go:52] waiting for apiserver process to appear ...
	I0407 15:05:53.957826    8528 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 15:05:53.990302    8528 api_server.go:72] duration metric: took 2.4577972s to wait for apiserver process to appear ...
	I0407 15:05:53.990302    8528 api_server.go:88] waiting for apiserver healthz status ...
	I0407 15:05:53.990421    8528 api_server.go:253] Checking apiserver healthz at https://172.17.90.208:8443/healthz ...
	I0407 15:05:54.001961    8528 api_server.go:279] https://172.17.90.208:8443/healthz returned 200:
	ok
	I0407 15:05:54.004531    8528 api_server.go:141] control plane version: v1.32.2
	I0407 15:05:54.004531    8528 api_server.go:131] duration metric: took 14.2284ms to wait for apiserver health ...
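As a rough illustration of the healthz probe logged above, a stand-alone check could look like the sketch below; the address is copied from this run, and certificate verification is skipped only to keep the sketch self-contained (a real client would trust the cluster CA instead).

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		// Probe the apiserver healthz endpoint, as the api_server.go lines above do.
		client := &http.Client{
			Timeout:   5 * time.Second,
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		resp, err := client.Get("https://172.17.90.208:8443/healthz")
		if err != nil {
			fmt.Println("healthz check failed:", err)
			return
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.StatusCode, string(body)) // expect "200 ok" on a healthy control plane
	}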
	I0407 15:05:54.004531    8528 system_pods.go:43] waiting for kube-system pods to appear ...
	I0407 15:05:54.145827    8528 system_pods.go:59] 6 kube-system pods found
	I0407 15:05:54.145896    8528 system_pods.go:61] "coredns-668d6bf9bc-w69np" [7873c756-d86b-4882-95b3-86489046d8c2] Running
	I0407 15:05:54.145896    8528 system_pods.go:61] "etcd-pause-061700" [fe6837b6-75b3-4204-bb4d-f6ba934abae0] Running
	I0407 15:05:54.145896    8528 system_pods.go:61] "kube-apiserver-pause-061700" [c67ed7cc-8d0b-4e40-854b-64a555c57a12] Running
	I0407 15:05:54.145896    8528 system_pods.go:61] "kube-controller-manager-pause-061700" [2d47120b-7bed-4a35-b577-f502b3ebb3dc] Running
	I0407 15:05:54.145896    8528 system_pods.go:61] "kube-proxy-7w9vv" [8e6f0aec-e738-4538-bf90-80b15e69c731] Running
	I0407 15:05:54.145896    8528 system_pods.go:61] "kube-scheduler-pause-061700" [c99eee47-cc17-4da3-8ebe-c79ec253b06a] Running
	I0407 15:05:54.145896    8528 system_pods.go:74] duration metric: took 141.3641ms to wait for pod list to return data ...
	I0407 15:05:54.145996    8528 default_sa.go:34] waiting for default service account to be created ...
	I0407 15:05:54.344668    8528 default_sa.go:45] found service account: "default"
	I0407 15:05:54.344668    8528 default_sa.go:55] duration metric: took 198.6701ms for default service account to be created ...
	I0407 15:05:54.344668    8528 system_pods.go:116] waiting for k8s-apps to be running ...
	I0407 15:05:54.545598    8528 system_pods.go:86] 6 kube-system pods found
	I0407 15:05:54.545598    8528 system_pods.go:89] "coredns-668d6bf9bc-w69np" [7873c756-d86b-4882-95b3-86489046d8c2] Running
	I0407 15:05:54.545598    8528 system_pods.go:89] "etcd-pause-061700" [fe6837b6-75b3-4204-bb4d-f6ba934abae0] Running
	I0407 15:05:54.545598    8528 system_pods.go:89] "kube-apiserver-pause-061700" [c67ed7cc-8d0b-4e40-854b-64a555c57a12] Running
	I0407 15:05:54.545598    8528 system_pods.go:89] "kube-controller-manager-pause-061700" [2d47120b-7bed-4a35-b577-f502b3ebb3dc] Running
	I0407 15:05:54.545598    8528 system_pods.go:89] "kube-proxy-7w9vv" [8e6f0aec-e738-4538-bf90-80b15e69c731] Running
	I0407 15:05:54.545598    8528 system_pods.go:89] "kube-scheduler-pause-061700" [c99eee47-cc17-4da3-8ebe-c79ec253b06a] Running
	I0407 15:05:54.545598    8528 system_pods.go:126] duration metric: took 200.9284ms to wait for k8s-apps to be running ...
	I0407 15:05:54.545598    8528 system_svc.go:44] waiting for kubelet service to be running ....
	I0407 15:05:54.557585    8528 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 15:05:54.585583    8528 system_svc.go:56] duration metric: took 39.9842ms WaitForService to wait for kubelet
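The kubelet check above relies only on the exit status of systemctl is-active --quiet; a simplified sketch of the same idea in Go (run on the guest, not on the Windows host) is:

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// systemctl is-active --quiet prints nothing and signals state via exit code:
		// 0 means active, non-zero means inactive/failed/unknown.
		err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run()
		if err != nil {
			fmt.Println("kubelet service is not active:", err)
			return
		}
		fmt.Println("kubelet service is active")
	}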
	I0407 15:05:54.585583    8528 kubeadm.go:582] duration metric: took 3.0531117s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0407 15:05:54.585583    8528 node_conditions.go:102] verifying NodePressure condition ...
	I0407 15:05:54.745049    8528 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0407 15:05:54.745144    8528 node_conditions.go:123] node cpu capacity is 2
	I0407 15:05:54.745144    8528 node_conditions.go:105] duration metric: took 159.5598ms to run NodePressure ...
	I0407 15:05:54.745144    8528 start.go:241] waiting for startup goroutines ...
	I0407 15:05:54.745259    8528 start.go:246] waiting for cluster config update ...
	I0407 15:05:54.745365    8528 start.go:255] writing updated cluster config ...
	I0407 15:05:54.758275    8528 ssh_runner.go:195] Run: rm -f paused
	I0407 15:05:54.935508    8528 start.go:600] kubectl: 1.32.3, cluster: 1.32.2 (minor skew: 0)
	I0407 15:05:54.939934    8528 out.go:177] * Done! kubectl is now configured to use "pause-061700" cluster and "default" namespace by default
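The final line reports the kubectl/cluster minor-version skew; a small, hypothetical helper (not minikube's implementation) showing how such a skew can be computed from two version strings:

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minorSkew returns the absolute difference of the minor components of two
	// "major.minor.patch" version strings, e.g. minorSkew("1.32.3", "1.32.2") == 0.
	func minorSkew(a, b string) (int, error) {
		ma, err := minor(a)
		if err != nil {
			return 0, err
		}
		mb, err := minor(b)
		if err != nil {
			return 0, err
		}
		if ma > mb {
			return ma - mb, nil
		}
		return mb - ma, nil
	}

	func minor(v string) (int, error) {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		if len(parts) < 2 {
			return 0, fmt.Errorf("unexpected version %q", v)
		}
		return strconv.Atoi(parts[1])
	}

	func main() {
		skew, err := minorSkew("1.32.3", "1.32.2")
		if err != nil {
			panic(err)
		}
		fmt.Println("minor skew:", skew) // 0, matching the log line above
	}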
	I0407 15:05:51.949590   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	I0407 15:05:54.340720   13864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:05:54.340720   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:05:54.340813   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-287100 ).networkadapters[0]).ipaddresses[0]
	I0407 15:05:54.758275   13960 config.go:182] Loaded profile config "cert-expiration-287100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 15:05:54.759274   13960 config.go:182] Loaded profile config "ha-573100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 15:05:54.759274   13960 config.go:182] Loaded profile config "kubernetes-upgrade-003200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 15:05:54.760273   13960 config.go:182] Loaded profile config "pause-061700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 15:05:54.760273   13960 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 15:06:01.099368   13960 out.go:177] * Using the hyperv driver based on user configuration
	I0407 15:06:01.104377   13960 start.go:297] selected driver: hyperv
	I0407 15:06:01.104377   13960 start.go:901] validating driver "hyperv" against <nil>
	I0407 15:06:01.104377   13960 start.go:912] status for hyperv: {Installed:true Healthy:true Running:false NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0407 15:06:01.164353   13960 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 15:06:01.166349   13960 start_flags.go:942] Waiting for no components: map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false]
	I0407 15:06:01.166349   13960 cni.go:84] Creating CNI manager for ""
	I0407 15:06:01.166349   13960 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0407 15:06:01.166649   13960 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0407 15:06:01.166833   13960 start.go:340] cluster config:
	{Name:docker-flags-422800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:2048 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[FOO=BAR BAZ=BAT] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[debug icc=true] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:docker-flags-422800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:false EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:false apps_running:false default_sa:false extra:false kubelet:false node_ready:false system_pods:false] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 15:06:01.166833   13960 iso.go:125] acquiring lock: {Name:mk99bbb6a54210c1995fdf151b41c83b57c3735b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 15:06:01.171167   13960 out.go:177] * Starting "docker-flags-422800" primary control-plane node in "docker-flags-422800" cluster
	I0407 15:05:57.370048   13864 main.go:141] libmachine: [stdout =====>] : 
	I0407 15:05:57.371049   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:05:58.371573   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	I0407 15:06:00.944426   13864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:06:00.944426   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:00.944517   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-287100 ).networkadapters[0]).ipaddresses[0]
	I0407 15:06:01.172178   13960 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 15:06:01.172178   13960 preload.go:146] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0407 15:06:01.172178   13960 cache.go:56] Caching tarball of preloaded images
	I0407 15:06:01.172178   13960 preload.go:172] Found C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0407 15:06:01.172178   13960 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0407 15:06:01.172178   13960 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\docker-flags-422800\config.json ...
	I0407 15:06:01.172178   13960 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\docker-flags-422800\config.json: {Name:mk59681225c11f992fb98ca0cdd731c121f621cd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 15:06:01.176180   13960 start.go:360] acquireMachinesLock for docker-flags-422800: {Name:mk88ace50ad3bf72786f3a589a5328076247f3a1 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0407 15:06:04.305418   13864 main.go:141] libmachine: [stdout =====>] : 
	I0407 15:06:04.305418   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:05.305949   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	I0407 15:06:07.730399   13864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:06:07.730504   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:07.730595   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-287100 ).networkadapters[0]).ipaddresses[0]
	I0407 15:06:10.470986   13864 main.go:141] libmachine: [stdout =====>] : 
	I0407 15:06:10.470986   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:11.471155   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	I0407 15:06:13.903946   13864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:06:13.903946   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:13.904433   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-287100 ).networkadapters[0]).ipaddresses[0]
	I0407 15:06:16.718088   13864 main.go:141] libmachine: [stdout =====>] : 172.17.86.101
	
	I0407 15:06:16.718088   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:16.718723   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	I0407 15:06:19.084639   13864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:06:19.084639   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:19.085342   13864 machine.go:93] provisionDockerMachine start ...
	I0407 15:06:19.085427   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	I0407 15:06:21.434317   13864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:06:21.434317   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:21.434534   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-287100 ).networkadapters[0]).ipaddresses[0]
	I0407 15:06:24.184019   13864 main.go:141] libmachine: [stdout =====>] : 172.17.86.101
	
	I0407 15:06:24.184095   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:24.192523   13864 main.go:141] libmachine: Using SSH client type: native
	I0407 15:06:24.209280   13864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.86.101 22 <nil> <nil>}
	I0407 15:06:24.209280   13864 main.go:141] libmachine: About to run SSH command:
	hostname
	I0407 15:06:24.341491   13864 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0407 15:06:24.341491   13864 buildroot.go:166] provisioning hostname "cert-expiration-287100"
	I0407 15:06:24.341491   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	I0407 15:06:26.734433   13864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:06:26.734562   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:26.734647   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-287100 ).networkadapters[0]).ipaddresses[0]
	I0407 15:06:29.511755   13864 main.go:141] libmachine: [stdout =====>] : 172.17.86.101
	
	I0407 15:06:29.512114   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:29.517749   13864 main.go:141] libmachine: Using SSH client type: native
	I0407 15:06:29.518501   13864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.86.101 22 <nil> <nil>}
	I0407 15:06:29.518501   13864 main.go:141] libmachine: About to run SSH command:
	sudo hostname cert-expiration-287100 && echo "cert-expiration-287100" | sudo tee /etc/hostname
	I0407 15:06:29.697637   13864 main.go:141] libmachine: SSH cmd err, output: <nil>: cert-expiration-287100
	
	I0407 15:06:29.697637   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	I0407 15:06:32.062720   13864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:06:32.062720   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:32.062720   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-287100 ).networkadapters[0]).ipaddresses[0]
	I0407 15:06:34.878836   13864 main.go:141] libmachine: [stdout =====>] : 172.17.86.101
	
	I0407 15:06:34.878836   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:34.885240   13864 main.go:141] libmachine: Using SSH client type: native
	I0407 15:06:34.885874   13864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.86.101 22 <nil> <nil>}
	I0407 15:06:34.885874   13864 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\scert-expiration-287100' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 cert-expiration-287100/g' /etc/hosts;
				else 
					echo '127.0.1.1 cert-expiration-287100' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0407 15:06:35.044095   13864 main.go:141] libmachine: SSH cmd err, output: <nil>: 
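The provisioning steps above run small shell snippets over SSH against the guest; a minimal sketch of that pattern using golang.org/x/crypto/ssh is shown below. The key path, user, and address are placeholders taken from this run, and this is an illustration of the approach, not minikube's libmachine code.

	package main

	import (
		"fmt"
		"os"

		"golang.org/x/crypto/ssh"
	)

	func main() {
		// Placeholder path and address from this test run; adjust for your environment.
		key, err := os.ReadFile(`C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\cert-expiration-287100\id_rsa`)
		if err != nil {
			panic(err)
		}
		signer, err := ssh.ParsePrivateKey(key)
		if err != nil {
			panic(err)
		}
		cfg := &ssh.ClientConfig{
			User:            "docker",
			Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
			HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
		}
		client, err := ssh.Dial("tcp", "172.17.86.101:22", cfg)
		if err != nil {
			panic(err)
		}
		defer client.Close()
		session, err := client.NewSession()
		if err != nil {
			panic(err)
		}
		defer session.Close()
		out, err := session.CombinedOutput("hostname")
		fmt.Print(string(out))
		if err != nil {
			fmt.Println("command failed:", err)
		}
	}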
	I0407 15:06:35.044095   13864 buildroot.go:172] set auth options {CertDir:C:\Users\jenkins.minikube3\minikube-integration\.minikube CaCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem CaPrivateKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ServerKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem ClientKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem ServerCertSANs:[] StorePath:C:\Users\jenkins.minikube3\minikube-integration\.minikube}
	I0407 15:06:35.044095   13864 buildroot.go:174] setting up certificates
	I0407 15:06:35.044095   13864 provision.go:84] configureAuth start
	I0407 15:06:35.044095   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	I0407 15:06:37.396732   13864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:06:37.396732   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:37.396732   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-287100 ).networkadapters[0]).ipaddresses[0]
	I0407 15:06:40.231161   13864 main.go:141] libmachine: [stdout =====>] : 172.17.86.101
	
	I0407 15:06:40.231161   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:40.231252   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	I0407 15:06:42.548879   13864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:06:42.548879   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:42.549015   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-287100 ).networkadapters[0]).ipaddresses[0]
	I0407 15:06:45.271212   13864 main.go:141] libmachine: [stdout =====>] : 172.17.86.101
	
	I0407 15:06:45.271418   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:45.271418   13864 provision.go:143] copyHostCerts
	I0407 15:06:45.272050   13864 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem, removing ...
	I0407 15:06:45.272050   13864 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\ca.pem
	I0407 15:06:45.272613   13864 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/ca.pem (1078 bytes)
	I0407 15:06:45.274209   13864 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem, removing ...
	I0407 15:06:45.274209   13864 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cert.pem
	I0407 15:06:45.274731   13864 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\cert.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/cert.pem (1123 bytes)
	I0407 15:06:45.276431   13864 exec_runner.go:144] found C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem, removing ...
	I0407 15:06:45.276431   13864 exec_runner.go:203] rm: C:\Users\jenkins.minikube3\minikube-integration\.minikube\key.pem
	I0407 15:06:45.276866   13864 exec_runner.go:151] cp: C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\key.pem --> C:\Users\jenkins.minikube3\minikube-integration\.minikube/key.pem (1679 bytes)
	I0407 15:06:45.278518   13864 provision.go:117] generating server cert: C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem ca-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem private-key=C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca-key.pem org=jenkins.cert-expiration-287100 san=[127.0.0.1 172.17.86.101 cert-expiration-287100 localhost minikube]
	I0407 15:06:45.518455   13864 provision.go:177] copyRemoteCerts
	I0407 15:06:45.535359   13864 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0407 15:06:45.535359   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	I0407 15:06:47.917024   13864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:06:47.917409   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:47.917448   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-287100 ).networkadapters[0]).ipaddresses[0]
	I0407 15:06:50.736725   13864 main.go:141] libmachine: [stdout =====>] : 172.17.86.101
	
	I0407 15:06:50.736725   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:50.736937   13864 sshutil.go:53] new ssh client: &{IP:172.17.86.101 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\cert-expiration-287100\id_rsa Username:docker}
	I0407 15:06:50.846263   13864 ssh_runner.go:235] Completed: sudo mkdir -p /etc/docker /etc/docker /etc/docker: (5.310762s)
	I0407 15:06:50.846623   13864 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\certs\ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0407 15:06:50.900074   13864 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server.pem --> /etc/docker/server.pem (1233 bytes)
	I0407 15:06:50.953189   13864 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0407 15:06:51.019570   13864 provision.go:87] duration metric: took 15.9753436s to configureAuth
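configureAuth above generates a server certificate whose SANs cover the VM IP and hostname; a short sketch for inspecting the SANs of such a PEM certificate follows (the file name is a placeholder for the server.pem written by the provisioner).

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
	)

	func main() {
		// Placeholder path; point it at the generated server.pem.
		data, err := os.ReadFile("server.pem")
		if err != nil {
			panic(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			panic("no PEM block found")
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			panic(err)
		}
		fmt.Println("DNS SANs:", cert.DNSNames)
		fmt.Println("IP SANs: ", cert.IPAddresses)
	}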
	I0407 15:06:51.019570   13864 buildroot.go:189] setting minikube options for container-runtime
	I0407 15:06:51.020201   13864 config.go:182] Loaded profile config "cert-expiration-287100": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 15:06:51.020201   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	I0407 15:06:53.361897   13864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:06:53.361897   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:53.361897   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-287100 ).networkadapters[0]).ipaddresses[0]
	I0407 15:06:56.092916   13864 main.go:141] libmachine: [stdout =====>] : 172.17.86.101
	
	I0407 15:06:56.093878   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:56.102516   13864 main.go:141] libmachine: Using SSH client type: native
	I0407 15:06:56.103211   13864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.86.101 22 <nil> <nil>}
	I0407 15:06:56.103211   13864 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0407 15:06:56.244056   13864 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0407 15:06:56.244056   13864 buildroot.go:70] root file system type: tmpfs
	I0407 15:06:56.244056   13864 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0407 15:06:56.244056   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	I0407 15:06:58.535290   13864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:06:58.535290   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:06:58.535290   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-287100 ).networkadapters[0]).ipaddresses[0]
	I0407 15:07:01.289926   13864 main.go:141] libmachine: [stdout =====>] : 172.17.86.101
	
	I0407 15:07:01.289970   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:07:01.302157   13864 main.go:141] libmachine: Using SSH client type: native
	I0407 15:07:01.302814   13864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.86.101 22 <nil> <nil>}
	I0407 15:07:01.302814   13864 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0407 15:07:01.480607   13864 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperv --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0407 15:07:01.480607   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	I0407 15:07:03.902948   13864 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 15:07:03.902948   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:07:03.903834   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM cert-expiration-287100 ).networkadapters[0]).ipaddresses[0]
	I0407 15:07:06.783710   13864 main.go:141] libmachine: [stdout =====>] : 172.17.86.101
	
	I0407 15:07:06.783710   13864 main.go:141] libmachine: [stderr =====>] : 
	I0407 15:07:06.789563   13864 main.go:141] libmachine: Using SSH client type: native
	I0407 15:07:06.789733   13864 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xfb7d00] 0xfba840 <nil>  [] 0s} 172.17.86.101 22 <nil> <nil>}
	I0407 15:07:06.789733   13864 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0407 15:07:09.669608   13864 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0407 15:07:09.669608   13864 machine.go:96] duration metric: took 50.5838512s to provisionDockerMachine
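The unit file written above clears ExecStart= before setting it, because systemd refuses a non-oneshot service with more than one non-empty ExecStart= line. A hedged sketch that checks a single unit file for that problem (the file name is a placeholder; a real check would also consider drop-ins):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	func main() {
		// Count non-empty ExecStart= lines; more than one is rejected for
		// Type=notify services, as the comments in the unit above describe.
		f, err := os.Open("docker.service")
		if err != nil {
			panic(err)
		}
		defer f.Close()
		count := 0
		scanner := bufio.NewScanner(f)
		for scanner.Scan() {
			line := strings.TrimSpace(scanner.Text())
			if strings.HasPrefix(line, "ExecStart=") && line != "ExecStart=" {
				count++
			}
		}
		if err := scanner.Err(); err != nil {
			panic(err)
		}
		if count > 1 {
			fmt.Println("invalid unit: more than one ExecStart= setting")
			return
		}
		fmt.Println("ExecStart settings:", count)
	}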
	I0407 15:07:09.669739   13864 client.go:171] duration metric: took 2m12.662591s to LocalClient.Create
	I0407 15:07:09.669739   13864 start.go:167] duration metric: took 2m12.6627864s to libmachine.API.Create "cert-expiration-287100"
	I0407 15:07:09.669739   13864 start.go:293] postStartSetup for "cert-expiration-287100" (driver="hyperv")
	I0407 15:07:09.669739   13864 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0407 15:07:09.684679   13864 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0407 15:07:09.684679   13864 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM cert-expiration-287100 ).state
	
	
	==> Docker <==
	Apr 07 15:05:33 pause-061700 dockerd[4785]: time="2025-04-07T15:05:33.538317274Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:05:33 pause-061700 dockerd[4785]: time="2025-04-07T15:05:33.538767785Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:05:33 pause-061700 dockerd[4785]: time="2025-04-07T15:05:33.557183234Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 15:05:33 pause-061700 dockerd[4785]: time="2025-04-07T15:05:33.557344338Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 15:05:33 pause-061700 dockerd[4785]: time="2025-04-07T15:05:33.557440340Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:05:33 pause-061700 dockerd[4785]: time="2025-04-07T15:05:33.557642045Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:05:37 pause-061700 cri-dockerd[5091]: time="2025-04-07T15:05:37Z" level=info msg="Docker cri received runtime config &RuntimeConfig{NetworkConfig:&NetworkConfig{PodCidr:10.244.0.0/24,},}"
	Apr 07 15:05:39 pause-061700 dockerd[4785]: time="2025-04-07T15:05:39.032341912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 15:05:39 pause-061700 dockerd[4785]: time="2025-04-07T15:05:39.032566116Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 15:05:39 pause-061700 dockerd[4785]: time="2025-04-07T15:05:39.032646618Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:05:39 pause-061700 dockerd[4785]: time="2025-04-07T15:05:39.032875122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:05:39 pause-061700 cri-dockerd[5091]: time="2025-04-07T15:05:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b0dc1b9871e4be7d1fea10a590624b0a3b83ff430ace58432bdf928649f49458/resolv.conf as [nameserver 172.17.80.1]"
	Apr 07 15:05:39 pause-061700 dockerd[4785]: time="2025-04-07T15:05:39.356842478Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 15:05:39 pause-061700 dockerd[4785]: time="2025-04-07T15:05:39.356946780Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 15:05:39 pause-061700 dockerd[4785]: time="2025-04-07T15:05:39.356969680Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:05:39 pause-061700 dockerd[4785]: time="2025-04-07T15:05:39.357594591Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:05:39 pause-061700 cri-dockerd[5091]: time="2025-04-07T15:05:39Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b2da6ba15cffb604973e41cc2e49c457176c00cead7444c66c547ff7ad4650ff/resolv.conf as [nameserver 172.17.80.1]"
	Apr 07 15:05:40 pause-061700 dockerd[4785]: time="2025-04-07T15:05:40.956221372Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 15:05:40 pause-061700 dockerd[4785]: time="2025-04-07T15:05:40.958618283Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 15:05:40 pause-061700 dockerd[4785]: time="2025-04-07T15:05:40.958691083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:05:40 pause-061700 dockerd[4785]: time="2025-04-07T15:05:40.958927084Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:05:42 pause-061700 dockerd[4785]: time="2025-04-07T15:05:42.017642857Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 07 15:05:42 pause-061700 dockerd[4785]: time="2025-04-07T15:05:42.017866058Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 07 15:05:42 pause-061700 dockerd[4785]: time="2025-04-07T15:05:42.017884559Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 07 15:05:42 pause-061700 dockerd[4785]: time="2025-04-07T15:05:42.018002059Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	a3be8600406a2       c69fa2e9cbf5f       About a minute ago   Running             coredns                   1                   b2da6ba15cffb       coredns-668d6bf9bc-w69np
	68af283bf1797       f1332858868e1       About a minute ago   Running             kube-proxy                2                   b0dc1b9871e4b       kube-proxy-7w9vv
	4d467950a8715       b6a454c5a800d       About a minute ago   Running             kube-controller-manager   2                   4db0715c48d4d       kube-controller-manager-pause-061700
	d8e8e20f35ef8       a9e7e6b294baf       About a minute ago   Running             etcd                      2                   1789d114861e4       etcd-pause-061700
	4a599a9e74558       d8e673e7c9983       About a minute ago   Running             kube-scheduler            2                   95eb95f90e9f7       kube-scheduler-pause-061700
	e6f7e38412336       85b7a174738ba       About a minute ago   Running             kube-apiserver            2                   41027da2fae1e       kube-apiserver-pause-061700
	21a40c3fbaf48       f1332858868e1       About a minute ago   Exited              kube-proxy                1                   2f5f241cfe062       kube-proxy-7w9vv
	f6cdc91b933ec       b6a454c5a800d       About a minute ago   Exited              kube-controller-manager   1                   7cd55e7b32a2a       kube-controller-manager-pause-061700
	16d911c61616e       85b7a174738ba       About a minute ago   Exited              kube-apiserver            1                   50b2b0583a852       kube-apiserver-pause-061700
	8f484a9c7afa9       d8e673e7c9983       About a minute ago   Exited              kube-scheduler            1                   748eb02a6915a       kube-scheduler-pause-061700
	30f491f7e032a       a9e7e6b294baf       About a minute ago   Exited              etcd                      1                   992fe116fc3b4       etcd-pause-061700
	e5aa9de944f78       c69fa2e9cbf5f       7 minutes ago        Exited              coredns                   0                   62239029ddd72       coredns-668d6bf9bc-w69np
	
	
	==> coredns [a3be8600406a] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 52f38634f47d27a60a843ea08b564c25eb754b24bbf06ec66f8366b52e126543ce16cee7cc062958162af0c89604123ac00e3f032b67ea2f0f7eb90c30818844
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] 127.0.0.1:41269 - 62300 "HINFO IN 8540969459794854384.3787439977912440607. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.065369288s
	
	
	==> coredns [e5aa9de944f7] <==
	[INFO] plugin/reload: Running configuration SHA512 = 1b226df79860026c6a52e67daa10d7f0d57ec5b023288ec00c5e05f93523c894564e15b91770d3a07ae1cfbe861d15b37d4a0027e69c546ab112970993a3b03b
	CoreDNS-1.11.3
	linux/amd64, go1.21.11, a6338e9
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1464102347]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Apr-2025 15:00:00.730) (total time: 30009ms):
	Trace[1464102347]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30009ms (15:00:30.739)
	Trace[1464102347]: [30.009559892s] [30.009559892s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1046502233]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Apr-2025 15:00:00.739) (total time: 30000ms):
	Trace[1046502233]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (15:00:30.740)
	Trace[1046502233]: [30.000505169s] [30.000505169s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[581429437]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229 (07-Apr-2025 15:00:00.739) (total time: 30001ms):
	Trace[581429437]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (15:00:30.741)
	Trace[581429437]: [30.001574377s] [30.001574377s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.29.3/tools/cache/reflector.go:229: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] Reloading
	[INFO] plugin/reload: Running configuration SHA512 = 52f38634f47d27a60a843ea08b564c25eb754b24bbf06ec66f8366b52e126543ce16cee7cc062958162af0c89604123ac00e3f032b67ea2f0f7eb90c30818844
	[INFO] Reloading complete
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
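The older CoreDNS instance above timed out dialing the in-cluster apiserver service at 10.96.0.1:443; a minimal connectivity probe of that kind (address copied from the log, to be run from inside a pod or the node's network namespace) could look like:

	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		// Reproduce the kind of TCP dial CoreDNS attempts against the kubernetes
		// service ClusterIP.
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			fmt.Println("dial failed:", err) // e.g. i/o timeout, as in the log above
			return
		}
		defer conn.Close()
		fmt.Println("connected to", conn.RemoteAddr())
	}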
	
	
	==> describe nodes <==
	Name:               pause-061700
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=pause-061700
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=5cf7512d5a64c8581140916e82b849633d870277
	                    minikube.k8s.io/name=pause-061700
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2025_04_07T14_59_53_0700
	                    minikube.k8s.io/version=v1.35.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 07 Apr 2025 14:59:49 +0000
	Taints:             node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  pause-061700
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 07 Apr 2025 15:05:58 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 07 Apr 2025 15:05:37 +0000   Mon, 07 Apr 2025 15:06:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 07 Apr 2025 15:05:37 +0000   Mon, 07 Apr 2025 15:06:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 07 Apr 2025 15:05:37 +0000   Mon, 07 Apr 2025 15:06:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 07 Apr 2025 15:05:37 +0000   Mon, 07 Apr 2025 15:06:50 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  172.17.90.208
	  Hostname:    pause-061700
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2015784Ki
	  pods:               110
	System Info:
	  Machine ID:                 556ee7ba75714a5580fe1ade3ed31630
	  System UUID:                4fe662fb-5d13-ba4d-903a-6f842265b852
	  Boot ID:                    303bb202-ae42-46ff-ac14-f3640dc35a22
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.4.0
	  Kubelet Version:            v1.32.2
	  Kube-Proxy Version:         v1.32.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (6 in total)
	  Namespace                   Name                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-668d6bf9bc-w69np                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m26s
	  kube-system                 etcd-pause-061700                       100m (5%)     0 (0%)      100Mi (5%)       0 (0%)         7m31s
	  kube-system                 kube-apiserver-pause-061700             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m32s
	  kube-system                 kube-controller-manager-pause-061700    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m31s
	  kube-system                 kube-proxy-7w9vv                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m26s
	  kube-system                 kube-scheduler-pause-061700             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m31s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             170Mi (8%)  170Mi (8%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 102s                   kube-proxy       
	  Normal  Starting                 7m23s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  7m40s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m39s (x8 over 7m40s)  kubelet          Node pause-061700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m39s (x8 over 7m40s)  kubelet          Node pause-061700 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m39s (x7 over 7m40s)  kubelet          Node pause-061700 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    7m31s                  kubelet          Node pause-061700 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m31s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m31s                  kubelet          Node pause-061700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     7m31s                  kubelet          Node pause-061700 status is now: NodeHasSufficientPID
	  Normal  NodeReady                7m28s                  kubelet          Node pause-061700 status is now: NodeReady
	  Normal  RegisteredNode           7m27s                  node-controller  Node pause-061700 event: Registered Node pause-061700 in Controller
	  Normal  NodeHasNoDiskPressure    112s (x8 over 112s)    kubelet          Node pause-061700 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 112s                   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  112s (x8 over 112s)    kubelet          Node pause-061700 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     112s (x7 over 112s)    kubelet          Node pause-061700 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  112s                   kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           99s                    node-controller  Node pause-061700 event: Registered Node pause-061700 in Controller
	  Normal  NodeNotReady             34s                    node-controller  Node pause-061700 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +8.599546] systemd-fstab-generator[2278]: Ignoring "noauto" option for root device
	[  +0.173762] kauditd_printk_skb: 62 callbacks suppressed
	[  +5.671884] systemd-fstab-generator[2400]: Ignoring "noauto" option for root device
	[  +0.193790] kauditd_printk_skb: 12 callbacks suppressed
	[Apr 7 15:00] kauditd_printk_skb: 89 callbacks suppressed
	[Apr 7 15:05] systemd-fstab-generator[4357]: Ignoring "noauto" option for root device
	[  +0.936153] systemd-fstab-generator[4393]: Ignoring "noauto" option for root device
	[  +0.302541] systemd-fstab-generator[4405]: Ignoring "noauto" option for root device
	[  +0.325424] systemd-fstab-generator[4419]: Ignoring "noauto" option for root device
	[  +5.422289] kauditd_printk_skb: 89 callbacks suppressed
	[  +8.216000] systemd-fstab-generator[5040]: Ignoring "noauto" option for root device
	[  +0.236085] systemd-fstab-generator[5052]: Ignoring "noauto" option for root device
	[  +0.250345] systemd-fstab-generator[5065]: Ignoring "noauto" option for root device
	[  +0.362540] systemd-fstab-generator[5079]: Ignoring "noauto" option for root device
	[  +1.110603] systemd-fstab-generator[5252]: Ignoring "noauto" option for root device
	[  +2.529955] kauditd_printk_skb: 187 callbacks suppressed
	[  +3.610955] systemd-fstab-generator[6289]: Ignoring "noauto" option for root device
	[  +1.390113] kauditd_printk_skb: 62 callbacks suppressed
	[ +12.276176] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.797903] systemd-fstab-generator[6990]: Ignoring "noauto" option for root device
	[Apr 7 15:06] systemd-fstab-generator[7055]: Ignoring "noauto" option for root device
	[  +0.168308] kauditd_printk_skb: 12 callbacks suppressed
	[ +21.777193] systemd-fstab-generator[7367]: Ignoring "noauto" option for root device
	[  +0.209845] kauditd_printk_skb: 12 callbacks suppressed
	[ +24.745411] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> etcd [30f491f7e032] <==
	{"level":"info","ts":"2025-04-07T15:05:27.781830Z","caller":"etcdserver/server.go:532","msg":"No snapshot found. Recovering WAL from scratch!"}
	{"level":"info","ts":"2025-04-07T15:05:27.814238Z","caller":"etcdserver/raft.go:540","msg":"restarting local member","cluster-id":"42fa8dc9b0ce1a09","local-member-id":"2560496dab777425","commit-index":592}
	{"level":"info","ts":"2025-04-07T15:05:27.814704Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2560496dab777425 switched to configuration voters=()"}
	{"level":"info","ts":"2025-04-07T15:05:27.814766Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2560496dab777425 became follower at term 2"}
	{"level":"info","ts":"2025-04-07T15:05:27.814804Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"newRaft 2560496dab777425 [peers: [], term: 2, commit: 592, applied: 0, lastindex: 592, lastterm: 2]"}
	{"level":"warn","ts":"2025-04-07T15:05:27.854531Z","caller":"auth/store.go:1241","msg":"simple token is not cryptographically signed"}
	{"level":"info","ts":"2025-04-07T15:05:27.869031Z","caller":"mvcc/kvstore.go:423","msg":"kvstore restored","current-rev":517}
	{"level":"info","ts":"2025-04-07T15:05:27.883598Z","caller":"etcdserver/quota.go:94","msg":"enabled backend quota with default value","quota-name":"v3-applier","quota-size-bytes":2147483648,"quota-size":"2.1 GB"}
	{"level":"info","ts":"2025-04-07T15:05:27.893845Z","caller":"etcdserver/corrupt.go:96","msg":"starting initial corruption check","local-member-id":"2560496dab777425","timeout":"7s"}
	{"level":"info","ts":"2025-04-07T15:05:27.894840Z","caller":"etcdserver/corrupt.go:177","msg":"initial corruption checking passed; no corruption","local-member-id":"2560496dab777425"}
	{"level":"info","ts":"2025-04-07T15:05:27.894937Z","caller":"etcdserver/server.go:873","msg":"starting etcd server","local-member-id":"2560496dab777425","local-server-version":"3.5.16","cluster-version":"to_be_decided"}
	{"level":"info","ts":"2025-04-07T15:05:27.895085Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-07T15:05:27.895561Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-07T15:05:27.895634Z","caller":"fileutil/purge.go:50","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2025-04-07T15:05:27.895878Z","caller":"etcdserver/server.go:773","msg":"starting initial election tick advance","election-ticks":10}
	{"level":"info","ts":"2025-04-07T15:05:27.896650Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"2560496dab777425 switched to configuration voters=(2693233312544551973)"}
	{"level":"info","ts":"2025-04-07T15:05:27.896781Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"42fa8dc9b0ce1a09","local-member-id":"2560496dab777425","added-peer-id":"2560496dab777425","added-peer-peer-urls":["https://172.17.90.208:2380"]}
	{"level":"info","ts":"2025-04-07T15:05:27.899923Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"42fa8dc9b0ce1a09","local-member-id":"2560496dab777425","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T15:05:27.900021Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2025-04-07T15:05:27.906319Z","caller":"v3rpc/health.go:61","msg":"grpc service status changed","service":"","status":"SERVING"}
	{"level":"info","ts":"2025-04-07T15:05:27.912671Z","caller":"embed/etcd.go:729","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2025-04-07T15:05:27.913022Z","caller":"embed/etcd.go:280","msg":"now serving peer/client/metrics","local-member-id":"2560496dab777425","initial-advertise-peer-urls":["https://172.17.90.208:2380"],"listen-peer-urls":["https://172.17.90.208:2380"],"advertise-client-urls":["https://172.17.90.208:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://172.17.90.208:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2025-04-07T15:05:27.913067Z","caller":"embed/etcd.go:871","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2025-04-07T15:05:27.913315Z","caller":"embed/etcd.go:600","msg":"serving peer traffic","address":"172.17.90.208:2380"}
	{"level":"info","ts":"2025-04-07T15:05:27.913332Z","caller":"embed/etcd.go:572","msg":"cmux::serve","address":"172.17.90.208:2380"}
	
	
	==> etcd [d8e8e20f35ef] <==
	{"level":"warn","ts":"2025-04-07T15:05:45.876797Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"272.284894ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/endpoint-controller\" limit:1 ","response":"range_response_count:1 size:203"}
	{"level":"info","ts":"2025-04-07T15:05:45.876870Z","caller":"traceutil/trace.go:171","msg":"trace[2038297005] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/endpoint-controller; range_end:; response_count:1; response_revision:584; }","duration":"272.420795ms","start":"2025-04-07T15:05:45.604420Z","end":"2025-04-07T15:05:45.876840Z","steps":["trace[2038297005] 'agreement among raft nodes before linearized reading'  (duration: 272.148194ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T15:05:45.933543Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"318.545998ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner\" limit:1 ","response":"range_response_count:1 size:238"}
	{"level":"info","ts":"2025-04-07T15:05:45.933631Z","caller":"traceutil/trace.go:171","msg":"trace[1319495310] range","detail":"{range_begin:/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner; range_end:; response_count:1; response_revision:586; }","duration":"318.665198ms","start":"2025-04-07T15:05:45.614950Z","end":"2025-04-07T15:05:45.933615Z","steps":["trace[1319495310] 'agreement among raft nodes before linearized reading'  (duration: 318.511097ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T15:05:45.934191Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-07T15:05:45.614937Z","time spent":"319.1648ms","remote":"127.0.0.1:39986","response type":"/etcdserverpb.KV/Range","request count":0,"request size":78,"response count":1,"response size":260,"request content":"key:\"/registry/serviceaccounts/kube-system/legacy-service-account-token-cleaner\" limit:1 "}
	{"level":"info","ts":"2025-04-07T15:05:45.934502Z","caller":"traceutil/trace.go:171","msg":"trace[1947744408] transaction","detail":"{read_only:false; response_revision:585; number_of_response:1; }","duration":"318.798398ms","start":"2025-04-07T15:05:45.615599Z","end":"2025-04-07T15:05:45.934398Z","steps":["trace[1947744408] 'process raft request'  (duration: 256.310924ms)","trace[1947744408] 'compare'  (duration: 61.296869ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-07T15:05:45.934809Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-07T15:05:45.615585Z","time spent":"319.1713ms","remote":"127.0.0.1:40052","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1252,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/endpointslices/kube-system/kube-dns-4789d\" mod_revision:428 > success:<request_put:<key:\"/registry/endpointslices/kube-system/kube-dns-4789d\" value_size:1193 >> failure:<request_range:<key:\"/registry/endpointslices/kube-system/kube-dns-4789d\" > >"}
	{"level":"info","ts":"2025-04-07T15:05:45.934891Z","caller":"traceutil/trace.go:171","msg":"trace[786704650] transaction","detail":"{read_only:false; response_revision:586; number_of_response:1; }","duration":"314.847482ms","start":"2025-04-07T15:05:45.620032Z","end":"2025-04-07T15:05:45.934879Z","steps":["trace[786704650] 'process raft request'  (duration: 313.367675ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T15:05:45.935470Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-07T15:05:45.619964Z","time spent":"315.268883ms","remote":"127.0.0.1:40212","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4124,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:577 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4075 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"warn","ts":"2025-04-07T15:05:45.935874Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"273.5985ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-061700\" limit:1 ","response":"range_response_count:1 size:5884"}
	{"level":"info","ts":"2025-04-07T15:05:45.935932Z","caller":"traceutil/trace.go:171","msg":"trace[965279645] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-061700; range_end:; response_count:1; response_revision:586; }","duration":"273.681901ms","start":"2025-04-07T15:05:45.662238Z","end":"2025-04-07T15:05:45.935920Z","steps":["trace[965279645] 'agreement among raft nodes before linearized reading'  (duration: 273.572901ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T15:05:46.004305Z","caller":"traceutil/trace.go:171","msg":"trace[1289835501] transaction","detail":"{read_only:false; response_revision:587; number_of_response:1; }","duration":"119.557724ms","start":"2025-04-07T15:05:45.884728Z","end":"2025-04-07T15:05:46.004286Z","steps":["trace[1289835501] 'process raft request'  (duration: 102.58475ms)","trace[1289835501] 'compare'  (duration: 16.06927ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-07T15:05:46.337876Z","caller":"traceutil/trace.go:171","msg":"trace[1697178065] transaction","detail":"{read_only:false; response_revision:588; number_of_response:1; }","duration":"290.998574ms","start":"2025-04-07T15:05:46.046856Z","end":"2025-04-07T15:05:46.337855Z","steps":["trace[1697178065] 'process raft request'  (duration: 261.044343ms)","trace[1697178065] 'compare'  (duration: 29.536429ms)"],"step_count":2}
	{"level":"info","ts":"2025-04-07T15:05:46.387919Z","caller":"traceutil/trace.go:171","msg":"trace[1760469387] linearizableReadLoop","detail":"{readStateIndex:685; appliedIndex:683; }","duration":"219.14326ms","start":"2025-04-07T15:05:46.168747Z","end":"2025-04-07T15:05:46.387890Z","steps":["trace[1760469387] 'read index received'  (duration: 139.168909ms)","trace[1760469387] 'applied index is now lower than readState.Index'  (duration: 79.972751ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-07T15:05:46.388269Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"219.440961ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/etcd-pause-061700\" limit:1 ","response":"range_response_count:1 size:5884"}
	{"level":"info","ts":"2025-04-07T15:05:46.388324Z","caller":"traceutil/trace.go:171","msg":"trace[718842230] range","detail":"{range_begin:/registry/pods/kube-system/etcd-pause-061700; range_end:; response_count:1; response_revision:589; }","duration":"219.674262ms","start":"2025-04-07T15:05:46.168634Z","end":"2025-04-07T15:05:46.388308Z","steps":["trace[718842230] 'agreement among raft nodes before linearized reading'  (duration: 219.392361ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T15:05:46.389035Z","caller":"traceutil/trace.go:171","msg":"trace[1826731664] transaction","detail":"{read_only:false; response_revision:589; number_of_response:1; }","duration":"338.104481ms","start":"2025-04-07T15:05:46.050871Z","end":"2025-04-07T15:05:46.388975Z","steps":["trace[1826731664] 'process raft request'  (duration: 336.731275ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T15:05:46.389271Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2025-04-07T15:05:46.050859Z","time spent":"338.333882ms","remote":"127.0.0.1:40212","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":4124,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/deployments/kube-system/coredns\" mod_revision:586 > success:<request_put:<key:\"/registry/deployments/kube-system/coredns\" value_size:4075 >> failure:<request_range:<key:\"/registry/deployments/kube-system/coredns\" > >"}
	{"level":"info","ts":"2025-04-07T15:05:50.745197Z","caller":"traceutil/trace.go:171","msg":"trace[1492266805] transaction","detail":"{read_only:false; response_revision:592; number_of_response:1; }","duration":"201.579477ms","start":"2025-04-07T15:05:50.543546Z","end":"2025-04-07T15:05:50.745125Z","steps":["trace[1492266805] 'process raft request'  (duration: 200.918575ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T15:05:50.892734Z","caller":"traceutil/trace.go:171","msg":"trace[1947618953] transaction","detail":"{read_only:false; response_revision:593; number_of_response:1; }","duration":"129.986766ms","start":"2025-04-07T15:05:50.762729Z","end":"2025-04-07T15:05:50.892716Z","steps":["trace[1947618953] 'process raft request'  (duration: 123.714539ms)"],"step_count":1}
	{"level":"warn","ts":"2025-04-07T15:05:51.492283Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"100.685938ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8369260481374726938 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/masterleases/172.17.90.208\" mod_revision:569 > success:<request_put:<key:\"/registry/masterleases/172.17.90.208\" value_size:66 lease:8369260481374726935 >> failure:<request_range:<key:\"/registry/masterleases/172.17.90.208\" > >>","response":"size:16"}
	{"level":"info","ts":"2025-04-07T15:05:51.492404Z","caller":"traceutil/trace.go:171","msg":"trace[792650830] linearizableReadLoop","detail":"{readStateIndex:692; appliedIndex:691; }","duration":"191.937835ms","start":"2025-04-07T15:05:51.300454Z","end":"2025-04-07T15:05:51.492392Z","steps":["trace[792650830] 'read index received'  (duration: 91.026696ms)","trace[792650830] 'applied index is now lower than readState.Index'  (duration: 100.910339ms)"],"step_count":2}
	{"level":"warn","ts":"2025-04-07T15:05:51.493653Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"193.18684ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-pause-061700\" limit:1 ","response":"range_response_count:1 size:6855"}
	{"level":"info","ts":"2025-04-07T15:05:51.493756Z","caller":"traceutil/trace.go:171","msg":"trace[2029776273] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-pause-061700; range_end:; response_count:1; response_revision:594; }","duration":"193.315541ms","start":"2025-04-07T15:05:51.300428Z","end":"2025-04-07T15:05:51.493743Z","steps":["trace[2029776273] 'agreement among raft nodes before linearized reading'  (duration: 192.313636ms)"],"step_count":1}
	{"level":"info","ts":"2025-04-07T15:05:51.494646Z","caller":"traceutil/trace.go:171","msg":"trace[993908416] transaction","detail":"{read_only:false; response_revision:594; number_of_response:1; }","duration":"197.226358ms","start":"2025-04-07T15:05:51.297263Z","end":"2025-04-07T15:05:51.494490Z","steps":["trace[993908416] 'process raft request'  (duration: 94.26801ms)","trace[993908416] 'compare'  (duration: 100.548337ms)"],"step_count":2}
	
	
	==> kernel <==
	 15:07:24 up 9 min,  0 users,  load average: 0.63, 0.82, 0.42
	Linux pause-061700 5.10.207 #1 SMP Tue Jan 14 08:15:54 UTC 2025 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [16d911c61616] <==
	W0407 15:05:28.372280       1 registry.go:256] calling componentGlobalsRegistry.AddFlags more than once, the registry will be set by the latest flags
	I0407 15:05:28.373120       1 options.go:238] external host was not specified, using 172.17.90.208
	I0407 15:05:28.379534       1 server.go:143] Version: v1.32.2
	I0407 15:05:28.379589       1 server.go:145] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	W0407 15:05:29.101964       1 logging.go:55] [core] [Channel #1 SubChannel #2]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	W0407 15:05:29.102776       1 logging.go:55] [core] [Channel #3 SubChannel #4]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	I0407 15:05:29.150462       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0407 15:05:29.151541       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0407 15:05:29.186487       1 plugins.go:157] Loaded 13 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionPolicy,MutatingAdmissionWebhook.
	I0407 15:05:29.186597       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0407 15:05:29.186970       1 instance.go:233] Using reconciler: lease
	W0407 15:05:29.190640       1 logging.go:55] [core] [Channel #5 SubChannel #6]grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: Error while dialing: dial tcp 127.0.0.1:2379: connect: connection refused"
	
	
	==> kube-apiserver [e6f7e3841233] <==
	I0407 15:05:37.561516       1 apf_controller.go:385] Running API Priority and Fairness periodic rebalancing process
	I0407 15:05:37.564124       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0407 15:05:37.584340       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0407 15:05:37.604279       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0407 15:05:37.604325       1 policy_source.go:240] refreshing policies
	I0407 15:05:37.627458       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0407 15:05:37.628104       1 aggregator.go:171] initial CRD sync complete...
	I0407 15:05:37.628252       1 autoregister_controller.go:144] Starting autoregister controller
	I0407 15:05:37.628269       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0407 15:05:37.628276       1 cache.go:39] Caches are synced for autoregister controller
	I0407 15:05:37.632583       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0407 15:05:37.660490       1 cache.go:39] Caches are synced for RemoteAvailability controller
	I0407 15:05:37.665956       1 cache.go:39] Caches are synced for LocalAvailability controller
	I0407 15:05:37.668403       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0407 15:05:37.692619       1 handler_discovery.go:451] Starting ResourceDiscoveryManager
	I0407 15:05:37.731227       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0407 15:05:38.386025       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0407 15:05:41.647077       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [172.17.90.208]
	I0407 15:05:41.649841       1 controller.go:615] quota admission added evaluator for: endpoints
	I0407 15:05:41.713098       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0407 15:05:43.250220       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0407 15:05:43.566418       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0407 15:05:44.713873       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0407 15:05:44.798940       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0407 15:06:50.571714       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	
	
	==> kube-controller-manager [4d467950a871] <==
	I0407 15:05:45.284180       1 shared_informer.go:320] Caches are synced for endpoint_slice_mirroring
	I0407 15:05:45.284244       1 shared_informer.go:320] Caches are synced for endpoint_slice
	I0407 15:05:45.284625       1 shared_informer.go:320] Caches are synced for PV protection
	I0407 15:05:45.334979       1 shared_informer.go:320] Caches are synced for resource quota
	I0407 15:05:45.335278       1 shared_informer.go:320] Caches are synced for deployment
	I0407 15:05:45.335947       1 shared_informer.go:320] Caches are synced for disruption
	I0407 15:05:45.336228       1 shared_informer.go:320] Caches are synced for PVC protection
	I0407 15:05:45.336507       1 shared_informer.go:320] Caches are synced for bootstrap_signer
	I0407 15:05:45.336976       1 shared_informer.go:320] Caches are synced for crt configmap
	I0407 15:05:45.284643       1 shared_informer.go:320] Caches are synced for ephemeral
	I0407 15:05:45.284652       1 shared_informer.go:320] Caches are synced for ClusterRoleAggregator
	I0407 15:05:45.284665       1 shared_informer.go:320] Caches are synced for validatingadmissionpolicy-status
	I0407 15:05:45.284674       1 shared_informer.go:320] Caches are synced for legacy-service-account-token-cleaner
	I0407 15:05:45.341676       1 shared_informer.go:320] Caches are synced for job
	I0407 15:05:45.341850       1 shared_informer.go:320] Caches are synced for garbage collector
	I0407 15:05:45.342100       1 shared_informer.go:320] Caches are synced for attach detach
	I0407 15:05:45.342483       1 shared_informer.go:320] Caches are synced for GC
	I0407 15:05:45.342574       1 shared_informer.go:320] Caches are synced for daemon sets
	I0407 15:05:45.438247       1 shared_informer.go:320] Caches are synced for ReplicaSet
	I0407 15:05:45.438583       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="176.101µs"
	I0407 15:06:50.494857       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-061700"
	I0407 15:06:50.520694       1 range_allocator.go:247] "Successfully synced" logger="node-ipam-controller" key="pause-061700"
	I0407 15:06:50.610392       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="87.480154ms"
	I0407 15:06:50.610562       1 replica_set.go:679] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-668d6bf9bc" duration="53.6µs"
	I0407 15:06:50.654226       1 node_lifecycle_controller.go:1038] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-controller-manager [f6cdc91b933e] <==
	
	
	==> kube-proxy [21a40c3fbaf4] <==
	
	
	==> kube-proxy [68af283bf179] <==
		add table ip kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	E0407 15:05:41.321463       1 proxier.go:733] "Error cleaning up nftables rules" err=<
		could not run nftables command: /dev/stdin:1:1-25: Error: Could not process rule: Operation not supported
		add table ip6 kube-proxy
		^^^^^^^^^^^^^^^^^^^^^^^^^
	 >
	I0407 15:05:41.610893       1 server.go:698] "Successfully retrieved node IP(s)" IPs=["172.17.90.208"]
	E0407 15:05:41.610987       1 server.go:234] "Kube-proxy configuration may be incomplete or incorrect" err="nodePortAddresses is unset; NodePort connections will be accepted on all local IPs. Consider using `--nodeport-addresses primary`"
	I0407 15:05:41.685522       1 server_linux.go:147] "No iptables support for family" ipFamily="IPv6"
	I0407 15:05:41.685605       1 server.go:245] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0407 15:05:41.685637       1 server_linux.go:170] "Using iptables Proxier"
	I0407 15:05:41.689308       1 proxier.go:255] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses" ipFamily="IPv4"
	I0407 15:05:41.689669       1 server.go:497] "Version info" version="v1.32.2"
	I0407 15:05:41.689708       1 server.go:499] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0407 15:05:41.744819       1 config.go:199] "Starting service config controller"
	I0407 15:05:41.744888       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0407 15:05:41.744947       1 config.go:105] "Starting endpoint slice config controller"
	I0407 15:05:41.744973       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0407 15:05:41.744961       1 config.go:329] "Starting node config controller"
	I0407 15:05:41.745002       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0407 15:05:41.845816       1 shared_informer.go:320] Caches are synced for service config
	I0407 15:05:41.845822       1 shared_informer.go:320] Caches are synced for node config
	I0407 15:05:41.845833       1 shared_informer.go:320] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [4a599a9e7455] <==
	E0407 15:05:37.545561       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csistoragecapacities\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	E0407 15:05:37.545756       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"statefulsets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	W0407 15:05:37.548584       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0407 15:05:37.548795       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"storageclasses\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 15:05:37.555410       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0407 15:05:37.555582       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumeclaims\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 15:05:37.555691       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0407 15:05:37.555771       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User \"system:kube-scheduler\" cannot list resource \"persistentvolumes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 15:05:37.555997       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0407 15:05:37.556087       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicationcontrollers\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 15:05:37.556308       1 reflector.go:569] runtime/asm_amd64.s:1700: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0407 15:05:37.556794       1 reflector.go:166] "Unhandled Error" err="runtime/asm_amd64.s:1700: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"extension-apiserver-authentication\" is forbidden: User \"system:kube-scheduler\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\"" logger="UnhandledError"
	W0407 15:05:37.558123       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0407 15:05:37.558264       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User \"system:kube-scheduler\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 15:05:37.558426       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0407 15:05:37.558519       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"csinodes\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 15:05:37.558658       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0407 15:05:37.558738       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User \"system:kube-scheduler\" cannot list resource \"namespaces\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 15:05:37.558939       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "volumeattachments" in API group "storage.k8s.io" at the cluster scope
	E0407 15:05:37.559035       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.VolumeAttachment: failed to list *v1.VolumeAttachment: volumeattachments.storage.k8s.io is forbidden: User \"system:kube-scheduler\" cannot list resource \"volumeattachments\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError"
	W0407 15:05:37.559229       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0407 15:05:37.559324       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User \"system:kube-scheduler\" cannot list resource \"pods\" in API group \"\" at the cluster scope" logger="UnhandledError"
	W0407 15:05:37.559467       1 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0407 15:05:37.559556       1 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User \"system:kube-scheduler\" cannot list resource \"replicasets\" in API group \"apps\" at the cluster scope" logger="UnhandledError"
	I0407 15:05:39.135452       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [8f484a9c7afa] <==
	I0407 15:05:29.261338       1 serving.go:386] Generated self-signed cert in-memory
	
	
	==> kubelet <==
	Apr 07 15:05:36 pause-061700 kubelet[6296]: E0407 15:05:36.041692    6296 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-061700\" not found" node="pause-061700"
	Apr 07 15:05:36 pause-061700 kubelet[6296]: E0407 15:05:36.042399    6296 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-061700\" not found" node="pause-061700"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: E0407 15:05:37.032752    6296 kubelet.go:3196] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"pause-061700\" not found" node="pause-061700"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: I0407 15:05:37.457070    6296 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/etcd-pause-061700"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: E0407 15:05:37.631366    6296 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"etcd-pause-061700\" already exists" pod="kube-system/etcd-pause-061700"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: I0407 15:05:37.631425    6296 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-pause-061700"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: I0407 15:05:37.649613    6296 apiserver.go:52] "Watching apiserver"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: I0407 15:05:37.656935    6296 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: E0407 15:05:37.679743    6296 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-apiserver-pause-061700\" already exists" pod="kube-system/kube-apiserver-pause-061700"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: I0407 15:05:37.679932    6296 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-pause-061700"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: I0407 15:05:37.718360    6296 kubelet_node_status.go:125] "Node was previously registered" node="pause-061700"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: I0407 15:05:37.718489    6296 kubelet_node_status.go:79] "Successfully registered node" node="pause-061700"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: I0407 15:05:37.718527    6296 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: I0407 15:05:37.722552    6296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/8e6f0aec-e738-4538-bf90-80b15e69c731-lib-modules\") pod \"kube-proxy-7w9vv\" (UID: \"8e6f0aec-e738-4538-bf90-80b15e69c731\") " pod="kube-system/kube-proxy-7w9vv"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: I0407 15:05:37.722751    6296 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/8e6f0aec-e738-4538-bf90-80b15e69c731-xtables-lock\") pod \"kube-proxy-7w9vv\" (UID: \"8e6f0aec-e738-4538-bf90-80b15e69c731\") " pod="kube-system/kube-proxy-7w9vv"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: I0407 15:05:37.724314    6296 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: E0407 15:05:37.734696    6296 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-controller-manager-pause-061700\" already exists" pod="kube-system/kube-controller-manager-pause-061700"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: I0407 15:05:37.734791    6296 kubelet.go:3200] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-pause-061700"
	Apr 07 15:05:37 pause-061700 kubelet[6296]: E0407 15:05:37.763011    6296 kubelet.go:3202] "Failed creating a mirror pod" err="pods \"kube-scheduler-pause-061700\" already exists" pod="kube-system/kube-scheduler-pause-061700"
	Apr 07 15:05:39 pause-061700 kubelet[6296]: I0407 15:05:39.794770    6296 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b2da6ba15cffb604973e41cc2e49c457176c00cead7444c66c547ff7ad4650ff"
	Apr 07 15:05:39 pause-061700 kubelet[6296]: I0407 15:05:39.805484    6296 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b0dc1b9871e4be7d1fea10a590624b0a3b83ff430ace58432bdf928649f49458"
	Apr 07 15:06:03 pause-061700 systemd[1]: Stopping kubelet: The Kubernetes Node Agent...
	Apr 07 15:06:03 pause-061700 systemd[1]: kubelet.service: Deactivated successfully.
	Apr 07 15:06:03 pause-061700 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Apr 07 15:06:03 pause-061700 systemd[1]: kubelet.service: Consumed 1.743s CPU time.
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-061700 -n pause-061700
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.APIServer}} -p pause-061700 -n pause-061700: exit status 2 (13.24824s)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "pause-061700" apiserver is not running, skipping kubectl commands (state="Paused")
--- FAIL: TestPause/serial/Unpause (80.89s)

                                                
                                    
TestNetworkPlugins/group/false/Start (10800.434s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-windows-amd64.exe start -p false-004500 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperv
E0407 15:18:54.508625    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
panic: test timed out after 3h0m0s
	running tests:
		TestNetworkPlugins (27m45s)
		TestNetworkPlugins/group/auto (6m4s)
		TestNetworkPlugins/group/auto/Start (6m4s)
		TestNetworkPlugins/group/calico (3m45s)
		TestNetworkPlugins/group/calico/Start (3m45s)
		TestNetworkPlugins/group/custom-flannel (45s)
		TestNetworkPlugins/group/custom-flannel/Start (45s)
		TestNetworkPlugins/group/false (33s)
		TestNetworkPlugins/group/false/Start (33s)
		TestStartStop (27m31s)

                                                
                                                
goroutine 2332 [running]:
testing.(*M).startAlarm.func1()
	/usr/local/go/src/testing/testing.go:2484 +0x394
created by time.goFunc
	/usr/local/go/src/time/sleep.go:215 +0x2d

                                                
                                                
goroutine 1 [chan receive]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1753 +0x486
testing.tRunner(0xc000686540, 0xc00008bbc8)
	/usr/local/go/src/testing/testing.go:1798 +0x104
testing.runTests(0xc000716048, {0x56f72c0, 0x2b, 0x2b}, {0xffffffffffffffff?, 0xc000b5c0d0?, 0x571e640?})
	/usr/local/go/src/testing/testing.go:2277 +0x4b4
testing.(*M).Run(0xc00052e1e0)
	/usr/local/go/src/testing/testing.go:2142 +0x64a
k8s.io/minikube/test/integration.TestMain(0xc00052e1e0)
	/home/jenkins/workspace/Build_Cross/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0xa8

                                                
                                                
goroutine 2338 [syscall, 5 minutes]:
syscall.Syscall6(0x257e33a09d8?, 0x257bda70a38?, 0x2000?, 0xc000801808?, 0xc000b84000?, 0xc001699bf0?, 0x6b8659?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x440, {0xc000b85b6a?, 0x496, 0x70df1f?}, 0x2000?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:451
syscall.Read(0xc0016186c8?, {0xc000b85b6a?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:430 +0x2d
internal/poll.(*FD).Read(0xc0016186c8, {0xc000b85b6a, 0x496, 0x496})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000bb8118, {0xc000b85b6a?, 0x656d3f?, 0x2a65520?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc001b30540, {0x3c77c00, 0xc00001e090})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c77d80, 0xc001b30540}, {0x3c77c00, 0xc00001e090}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3c77d80, 0xc001b30540})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc001699f38?, {0x3c77d80?, 0xc001b30540?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3c77d80, 0xc001b30540}, {0x3c77ce0, 0xc000bb8118}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc0017560e0?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2304
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

                                                
                                                
goroutine 2210 [chan receive, 28 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1753 +0x486
testing.tRunner(0xc001798540, 0x391f9d8)
	/usr/local/go/src/testing/testing.go:1798 +0x104
created by testing.(*T).Run in goroutine 2042
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 2322 [select, 7 minutes]:
os/exec.(*Cmd).watchCtx(0xc001674480, 0xc001756ee0)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 2319
	/usr/local/go/src/os/exec/exec.go:775 +0x989

                                                
                                                
goroutine 2331 [syscall]:
syscall.Syscall(0xc001483d00?, 0x0?, 0x7a043b?, 0x1000000000000?, 0x1e?)
	/usr/local/go/src/runtime/syscall_windows.go:457 +0x29
syscall.WaitForSingleObject(0x418, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1149 +0x5a
os.(*Process).wait(0xc000797980?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc000797980)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc000797980)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc001c9dc00, 0xc000797980)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc001c9dc00)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:112 +0x4b
testing.tRunner(0xc001c9dc00, 0xc001b8c0f0)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2150
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 136 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 135
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 128 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3cc9620)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:311 +0x345
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 127
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:148 +0x245

                                                
                                                
goroutine 798 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:297 +0x19b
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 797
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2375 [syscall]:
syscall.Syscall6(0x257e30da898?, 0x257bda70a38?, 0x400?, 0xc000680008?, 0xc0014e0800?, 0xc0014f3bf0?, 0x6b8659?, 0xc200000000000000?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x76c, {0xc0014e09e9?, 0x217, 0x70df1f?}, 0x400?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:451
syscall.Read(0xc001c41448?, {0xc0014e09e9?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:430 +0x2d
internal/poll.(*FD).Read(0xc001c41448, {0xc0014e09e9, 0x217, 0x217})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00001e050, {0xc0014e09e9?, 0x656d3f?, 0x2a65520?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc001a4e180, {0x3c77c00, 0xc000688070})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c77d80, 0xc001a4e180}, {0x3c77c00, 0xc000688070}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3c77d80, 0xc001a4e180})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0014f3f38?, {0x3c77d80?, 0xc001a4e180?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3c77d80, 0xc001a4e180}, {0x3c77ce0, 0xc00001e050}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc001bb8540?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2331
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

                                                
                                                
goroutine 2148 [chan receive, 28 minutes]:
testing.(*testState).waitParallel(0xc0008652c0)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc000107340)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000107340)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000107340)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc000107340, 0xc0004a0380)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2095
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 135 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3cb8810, 0xc0000781c0}, 0xc00143df50, 0xc00143df98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3cb8810, 0xc0000781c0}, 0xa0?, 0xc00143df50, 0xc00143df98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3cb8810?, 0xc0000781c0?}, 0x0?, 0x6e726562754b7b3a?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00143dfd0?, 0x7dcc04?, 0x6d614e2030303338?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 129
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 134 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc000888e90, 0x3b)
	/usr/local/go/src/runtime/sema.go:597 +0x15d
sync.(*Cond).Wait(0xc0015a5d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3ccc4e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc000888ec0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:159 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc000100008?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00078c2d0, {0x3c796c0, 0xc000c047b0}, 0x1, 0xc0000781c0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00078c2d0, 0x3b9aca00, 0x0, 0x1, 0xc0000781c0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 129
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 129 [chan receive, 173 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc000888ec0, 0xc0000781c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 127
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2214 [chan receive, 28 minutes]:
testing.(*testState).waitParallel(0xc0008652c0)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00153f180)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00153f180)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00153f180)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc00153f180, 0xc001d71400)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2210
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 2320 [syscall, 3 minutes]:
syscall.Syscall6(0x257e30c23a8?, 0x257bda70108?, 0x800?, 0xc000600008?, 0xc001652800?, 0xc0015c5bf0?, 0x6b8659?, 0xc001683107?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x690, {0xc001652a25?, 0x5db, 0x70df1f?}, 0x800?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:451
syscall.Read(0xc001c40008?, {0xc001652a25?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:430 +0x2d
internal/poll.(*FD).Read(0xc001c40008, {0xc001652a25, 0x5db, 0x5db})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc0008621f8, {0xc001652a25?, 0x656d3f?, 0x2a65520?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc001b306f0, {0x3c77c00, 0xc0000c6b10})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c77d80, 0xc001b306f0}, {0x3c77c00, 0xc0000c6b10}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3c77d80, 0xc001b306f0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0015c5eb0?, {0x3c77d80?, 0xc001b306f0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3c77d80, 0xc001b306f0}, {0x3c77ce0, 0xc0008621f8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc001bb8700?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2319
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

                                                
                                                
goroutine 2372 [syscall]:
syscall.Syscall6(0x257e30c23a8?, 0x257bda70ed0?, 0x800?, 0xc000900008?, 0xc001653000?, 0xc000113bf0?, 0x6b8659?, 0x0?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x698, {0xc001653204?, 0x5fc, 0x70df1f?}, 0x800?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:451
syscall.Read(0xc001c40488?, {0xc001653204?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:430 +0x2d
internal/poll.(*FD).Read(0xc001c40488, {0xc001653204, 0x5fc, 0x5fc})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00001e008, {0xc001653204?, 0x656d3f?, 0x2a65520?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc001a4e090, {0x3c77c00, 0xc000688058})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c77d80, 0xc001a4e090}, {0x3c77c00, 0xc000688058}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0xc000113e90?, {0x3c77d80, 0xc001a4e090})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000113eb0?, {0x3c77d80?, 0xc001a4e090?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3c77d80, 0xc001a4e090}, {0x3c77ce0, 0xc00001e008}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc001c76a10?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2371
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

                                                
                                                
goroutine 651 [IO wait, 160 minutes]:
internal/poll.runtime_pollWait(0x257e34ff820, 0x72)
	/usr/local/go/src/runtime/netpoll.go:351 +0x85
internal/poll.(*pollDesc).wait(0x70cbb3?, 0x0?, 0x0)
	/usr/local/go/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.execIO(0xc00025c520, 0xc0015a7ba0)
	/usr/local/go/src/internal/poll/fd_windows.go:177 +0x105
internal/poll.(*FD).acceptOne(0xc00025c508, 0x524, {0xc000744780?, 0xc0015a7c00?, 0x7172e5?}, 0xc0015a7c34?)
	/usr/local/go/src/internal/poll/fd_windows.go:946 +0x65
internal/poll.(*FD).Accept(0xc00025c508, 0xc0015a7d80)
	/usr/local/go/src/internal/poll/fd_windows.go:980 +0x1b6
net.(*netFD).accept(0xc00025c508)
	/usr/local/go/src/net/fd_windows.go:182 +0x4b
net.(*TCPListener).accept(0xc001bf6380)
	/usr/local/go/src/net/tcpsock_posix.go:159 +0x1b
net.(*TCPListener).Accept(0xc001bf6380)
	/usr/local/go/src/net/tcpsock.go:380 +0x30
net/http.(*Server).Serve(0xc000c7c100, {0x3ca71c0, 0xc001bf6380})
	/usr/local/go/src/net/http/server.go:3424 +0x30c
net/http.(*Server).ListenAndServe(0xc000c7c100)
	/usr/local/go/src/net/http/server.go:3350 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(...)
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2230
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 648
	/home/jenkins/workspace/Build_Cross/test/integration/functional_test.go:2229 +0x129

                                                
                                                
goroutine 2095 [chan receive, 28 minutes]:
testing.tRunner.func1()
	/usr/local/go/src/testing/testing.go:1753 +0x486
testing.tRunner(0xc001842700, 0xc0015f4120)
	/usr/local/go/src/testing/testing.go:1798 +0x104
created by testing.(*T).Run in goroutine 1966
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 2305 [syscall, 5 minutes]:
syscall.Syscall6(0x257bda7d0c0?, 0x257bda705a0?, 0x400?, 0xc000800008?, 0xc0000e9400?, 0xc0015ebbf0?, 0x6b8659?, 0x2?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x59c, {0xc0000e95ec?, 0x214, 0x70df1f?}, 0x400?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:451
syscall.Read(0xc001618248?, {0xc0000e95ec?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:430 +0x2d
internal/poll.(*FD).Read(0xc001618248, {0xc0000e95ec, 0x214, 0x214})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000bb80c8, {0xc0000e95ec?, 0x656d3f?, 0x2a65520?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc001b304e0, {0x3c77c00, 0xc0006881c0})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c77d80, 0xc001b304e0}, {0x3c77c00, 0xc0006881c0}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3c77d80, 0xc001b304e0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc0015ebeb0?, {0x3c77d80?, 0xc001b304e0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3c77d80, 0xc001b304e0}, {0x3c77ce0, 0xc000bb80c8}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc000b8a480?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2304
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

                                                
                                                
goroutine 2339 [select, 5 minutes]:
os/exec.(*Cmd).watchCtx(0xc001674300, 0xc0017aab60)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 2304
	/usr/local/go/src/os/exec/exec.go:775 +0x989

                                                
                                                
goroutine 2211 [chan receive, 28 minutes]:
testing.(*testState).waitParallel(0xc0008652c0)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc001798e00)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc001798e00)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc001798e00)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc001798e00, 0xc001d71340)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2210
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 2147 [chan receive, 28 minutes]:
testing.(*testState).waitParallel(0xc0008652c0)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc000505500)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000505500)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000505500)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc000505500, 0xc0004a0300)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2095
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 2151 [chan receive]:
testing.(*T).Run(0xc001c27c00, {0x2f84d5a?, 0x3c6f108?}, 0xc001bed320)
	/usr/local/go/src/testing/testing.go:1859 +0x414
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001c27c00)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:111 +0x5bb
testing.tRunner(0xc001c27c00, 0xc0004a0500)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2095
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 2042 [chan receive, 28 minutes]:
testing.(*T).Run(0xc001c27dc0, {0x2f84d55?, 0x7a2c53?}, 0x391f9d8)
	/usr/local/go/src/testing/testing.go:1859 +0x414
k8s.io/minikube/test/integration.TestStartStop(0xc001c27dc0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc001c27dc0, 0x391f7f8)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 1164 [chan send, 144 minutes]:
os/exec.(*Cmd).watchCtx(0xc001b7a600, 0xc00080a7e0)
	/usr/local/go/src/os/exec/exec.go:814 +0x3e5
created by os/exec.(*Cmd).Start in goroutine 782
	/usr/local/go/src/os/exec/exec.go:775 +0x989

                                                
                                                
goroutine 797 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x3cb8810, 0xc0000781c0}, 0xc00145bf50, 0xc00145bf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x3cb8810, 0xc0000781c0}, 0x90?, 0xc00145bf50, 0xc00145bf98)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x3cb8810?, 0xc0000781c0?}, 0x0?, 0x0?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:200 +0x45
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00145bfd0?, 0x7dcc04?, 0xc001d70a80?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/poll.go:187 +0x36
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 873
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:145 +0x27a

                                                
                                                
goroutine 2152 [chan receive, 5 minutes]:
testing.(*T).Run(0xc000606700, {0x2f84d5a?, 0x3c6f108?}, 0xc001b301e0)
	/usr/local/go/src/testing/testing.go:1859 +0x414
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc000606700)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:111 +0x5bb
testing.tRunner(0xc000606700, 0xc0004a0580)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2095
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 2321 [syscall]:
syscall.Syscall6(0x257bda70108?, 0x10000?, 0x4000?, 0xc000580008?, 0xc00189e000?, 0xc001549bf0?, 0x6b8665?, 0x3233323720202020?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x46c, {0xc0018a95ba?, 0x4a46, 0x70df1f?}, 0x10000?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:451
syscall.Read(0xc001c40d88?, {0xc0018a95ba?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:430 +0x2d
internal/poll.(*FD).Read(0xc001c40d88, {0xc0018a95ba, 0x4a46, 0x4a46})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc000862228, {0xc0018a95ba?, 0x54f8?, 0x54f8?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc001b30720, {0x3c77c00, 0xc000bb8188})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c77d80, 0xc001b30720}, {0x3c77c00, 0xc000bb8188}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3c77d80, 0xc001b30720})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc001549eb0?, {0x3c77d80?, 0xc001b30720?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3c77d80, 0xc001b30720}, {0x3c77ce0, 0xc000862228}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc00163a230?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2319
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

                                                
                                                
goroutine 2212 [chan receive, 28 minutes]:
testing.(*testState).waitParallel(0xc0008652c0)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc0017996c0)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc0017996c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0017996c0)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc0017996c0, 0xc001d71380)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2210
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 796 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc001d711d0, 0x36)
	/usr/local/go/src/runtime/sema.go:597 +0x15d
sync.(*Cond).Wait(0xc001567d80?)
	/usr/local/go/src/sync/cond.go:71 +0x85
k8s.io/client-go/util/workqueue.(*Typed[...]).Get(0x3ccc4e0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/queue.go:277 +0x86
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc001d71200)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:159 +0x44
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:154
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0000d8008?)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00068cbf0, {0x3c796c0, 0xc000a1d170}, 0x1, 0xc0000781c0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00068cbf0, 0x3b9aca00, 0x0, 0x1, 0xc0000781c0)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/home/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.32.2/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 873
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:143 +0x1cf

                                                
                                                
goroutine 872 [select]:
k8s.io/client-go/util/workqueue.(*delayingType[...]).waitingLoop(0x3cc9620)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:311 +0x345
created by k8s.io/client-go/util/workqueue.newDelayingQueue[...] in goroutine 868
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/util/workqueue/delaying_queue.go:148 +0x245

                                                
                                                
goroutine 873 [chan receive, 150 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc001d71200, 0xc0000781c0)
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cert_rotation.go:150 +0x289
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 868
	/home/jenkins/go/pkg/mod/k8s.io/client-go@v0.32.2/transport/cache.go:122 +0x569

                                                
                                                
goroutine 2146 [chan receive, 28 minutes]:
testing.(*testState).waitParallel(0xc0008652c0)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00164ca80)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00164ca80)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc00164ca80)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc00164ca80, 0xc0004a0280)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2095
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 2376 [syscall]:
syscall.Syscall6(0x257e3508800?, 0x257bda70108?, 0x2000?, 0xc00006b808?, 0xc0018ae000?, 0xc001b71bf0?, 0x6b8659?, 0x66a23c?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x680, {0xc0018afaf5?, 0x50b, 0x70df1f?}, 0x2000?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:451
syscall.Read(0xc001c41b08?, {0xc0018afaf5?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:430 +0x2d
internal/poll.(*FD).Read(0xc001c41b08, {0xc0018afaf5, 0x50b, 0x50b})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00001e080, {0xc0018afaf5?, 0x656d3f?, 0x2a65520?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc001a4e1e0, {0x3c77c00, 0xc0007fa030})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c77d80, 0xc001a4e1e0}, {0x3c77c00, 0xc0007fa030}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3c77d80, 0xc001a4e1e0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc001b71eb0?, {0x3c77d80?, 0xc001a4e1e0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3c77d80, 0xc001a4e1e0}, {0x3c77ce0, 0xc00001e080}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc0017ab180?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2331
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

                                                
                                                
goroutine 2216 [chan receive, 28 minutes]:
testing.(*testState).waitParallel(0xc0008652c0)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc000489180)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc000489180)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc000489180)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc000489180, 0xc001d714c0)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2210
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 2319 [syscall, 7 minutes]:
syscall.Syscall(0xc0018ddd00?, 0x0?, 0x7a043b?, 0x1000000000000?, 0x1e?)
	/usr/local/go/src/runtime/syscall_windows.go:457 +0x29
syscall.WaitForSingleObject(0x644, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1149 +0x5a
os.(*Process).wait(0xc001674480?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc001674480)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc001674480)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc000489340, 0xc001674480)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc000489340)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:112 +0x4b
testing.tRunner(0xc000489340, 0xc001b30600)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2096
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 2096 [chan receive, 7 minutes]:
testing.(*T).Run(0xc001842fc0, {0x2f84d5a?, 0x3c6f108?}, 0xc001b30600)
	/usr/local/go/src/testing/testing.go:1859 +0x414
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001842fc0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:111 +0x5bb
testing.tRunner(0xc001842fc0, 0xc0004a0100)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2095
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 2215 [chan receive, 28 minutes]:
testing.(*testState).waitParallel(0xc0008652c0)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00153f340)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00153f340)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00153f340)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc00153f340, 0xc001d71440)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2210
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 2213 [chan receive, 28 minutes]:
testing.(*testState).waitParallel(0xc0008652c0)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc00153e540)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc00153e540)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc00153e540)
	/home/jenkins/workspace/Build_Cross/test/integration/start_stop_delete_test.go:92 +0x45
testing.tRunner(0xc00153e540, 0xc001d713c0)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2210
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 2150 [chan receive]:
testing.(*T).Run(0xc001c27880, {0x2f84d5a?, 0x3c6f108?}, 0xc001b8c0f0)
	/usr/local/go/src/testing/testing.go:1859 +0x414
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001c27880)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:111 +0x5bb
testing.tRunner(0xc001c27880, 0xc0004a0480)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2095
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 2371 [syscall]:
syscall.Syscall(0xc000b7dd00?, 0x0?, 0x7a043b?, 0x1000000000000?, 0x1e?)
	/usr/local/go/src/runtime/syscall_windows.go:457 +0x29
syscall.WaitForSingleObject(0x5a8, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1149 +0x5a
os.(*Process).wait(0xc000797800?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc000797800)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc000797800)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc000489880, 0xc000797800)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc000489880)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:112 +0x4b
testing.tRunner(0xc000489880, 0xc001bed320)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2151
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 2097 [chan receive, 28 minutes]:
testing.(*testState).waitParallel(0xc0008652c0)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc001843340)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc001843340)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001843340)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc001843340, 0xc0004a0180)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2095
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 2149 [chan receive, 28 minutes]:
testing.(*testState).waitParallel(0xc0008652c0)
	/usr/local/go/src/testing/testing.go:1926 +0xaf
testing.(*T).Parallel(0xc001c276c0)
	/usr/local/go/src/testing/testing.go:1578 +0x225
k8s.io/minikube/test/integration.MaybeParallel(0xc001c276c0)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:483 +0x34
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1(0xc001c276c0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:106 +0x317
testing.tRunner(0xc001c276c0, 0xc0004a0400)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2095
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 2304 [syscall, 5 minutes]:
syscall.Syscall(0xc0014d3d00?, 0x0?, 0x7a043b?, 0x1000000000000?, 0x1e?)
	/usr/local/go/src/runtime/syscall_windows.go:457 +0x29
syscall.WaitForSingleObject(0x718, 0xffffffff)
	/usr/local/go/src/syscall/zsyscall_windows.go:1149 +0x5a
os.(*Process).wait(0xc001674300?)
	/usr/local/go/src/os/exec_windows.go:28 +0xe6
os.(*Process).Wait(...)
	/usr/local/go/src/os/exec.go:358
os/exec.(*Cmd).Wait(0xc001674300)
	/usr/local/go/src/os/exec/exec.go:922 +0x45
os/exec.(*Cmd).Run(0xc001674300)
	/usr/local/go/src/os/exec/exec.go:626 +0x2d
k8s.io/minikube/test/integration.Run(0xc000489500, 0xc001674300)
	/home/jenkins/workspace/Build_Cross/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.TestNetworkPlugins.func1.1.1(0xc000489500)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:112 +0x4b
testing.tRunner(0xc000489500, 0xc001b301e0)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 2152
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 1966 [chan receive, 28 minutes]:
testing.(*T).Run(0xc001c26fc0, {0x2f84d55?, 0xc00143ff60?}, 0xc0015f4120)
	/usr/local/go/src/testing/testing.go:1859 +0x414
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc001c26fc0)
	/home/jenkins/workspace/Build_Cross/test/integration/net_test.go:52 +0xd3
testing.tRunner(0xc001c26fc0, 0x391f7b0)
	/usr/local/go/src/testing/testing.go:1792 +0xcb
created by testing.(*T).Run in goroutine 1
	/usr/local/go/src/testing/testing.go:1851 +0x3f6

                                                
                                                
goroutine 2374 [select]:
os/exec.(*Cmd).watchCtx(0xc000797800, 0xc001bb8620)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 2371
	/usr/local/go/src/os/exec/exec.go:775 +0x989

                                                
                                                
goroutine 2373 [syscall]:
syscall.Syscall6(0x257e3506fa0?, 0x257bda70ed0?, 0x2000?, 0xc000900808?, 0xc0017a8000?, 0xc000a5fbf0?, 0x6b8659?, 0x66a23c?)
	/usr/local/go/src/runtime/syscall_windows.go:463 +0x38
syscall.readFile(0x7f8, {0xc0017a9be2?, 0x41e, 0x70df1f?}, 0x2000?, 0x800000?)
	/usr/local/go/src/syscall/zsyscall_windows.go:1020 +0x8b
syscall.ReadFile(...)
	/usr/local/go/src/syscall/syscall_windows.go:451
syscall.Read(0xc001c40908?, {0xc0017a9be2?, 0x0?, 0x0?})
	/usr/local/go/src/syscall/syscall_windows.go:430 +0x2d
internal/poll.(*FD).Read(0xc001c40908, {0xc0017a9be2, 0x41e, 0x41e})
	/usr/local/go/src/internal/poll/fd_windows.go:424 +0x1b5
os.(*File).read(...)
	/usr/local/go/src/os/file_posix.go:29
os.(*File).Read(0xc00001e028, {0xc0017a9be2?, 0x656d3f?, 0x2a65520?})
	/usr/local/go/src/os/file.go:124 +0x4f
bytes.(*Buffer).ReadFrom(0xc001a4e0c0, {0x3c77c00, 0xc0007fa020})
	/usr/local/go/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x3c77d80, 0xc001a4e0c0}, {0x3c77c00, 0xc0007fa020}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:415 +0x151
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os.genericWriteTo(0x0?, {0x3c77d80, 0xc001a4e0c0})
	/usr/local/go/src/os/file.go:275 +0x4f
os.(*File).WriteTo(0xc000a5feb0?, {0x3c77d80?, 0xc001a4e0c0?})
	/usr/local/go/src/os/file.go:253 +0x49
io.copyBuffer({0x3c77d80, 0xc001a4e0c0}, {0x3c77ce0, 0xc00001e028}, {0x0, 0x0, 0x0})
	/usr/local/go/src/io/io.go:411 +0x9d
io.Copy(...)
	/usr/local/go/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/usr/local/go/src/os/exec/exec.go:596 +0x34
os/exec.(*Cmd).Start.func2(0xc0017aae00?)
	/usr/local/go/src/os/exec/exec.go:749 +0x2c
created by os/exec.(*Cmd).Start in goroutine 2371
	/usr/local/go/src/os/exec/exec.go:748 +0x9c5

                                                
                                                
goroutine 2377 [select]:
os/exec.(*Cmd).watchCtx(0xc000797980, 0xc001bb87e0)
	/usr/local/go/src/os/exec/exec.go:789 +0xb2
created by os/exec.(*Cmd).Start in goroutine 2331
	/usr/local/go/src/os/exec/exec.go:775 +0x989

                                                
                                    

Test pass (163/209)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 17.3
4 TestDownloadOnly/v1.20.0/preload-exists 0.07
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.34
9 TestDownloadOnly/v1.20.0/DeleteAll 0.68
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.81
12 TestDownloadOnly/v1.32.2/json-events 12.99
13 TestDownloadOnly/v1.32.2/preload-exists 0.03
16 TestDownloadOnly/v1.32.2/kubectl 0
17 TestDownloadOnly/v1.32.2/LogsDuration 0.42
18 TestDownloadOnly/v1.32.2/DeleteAll 0.73
19 TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds 0.79
21 TestBinaryMirror 9.97
22 TestOffline 286.39
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.28
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.26
27 TestAddons/Setup 433.13
29 TestAddons/serial/Volcano 66.49
31 TestAddons/serial/GCPAuth/Namespaces 0.48
32 TestAddons/serial/GCPAuth/FakeCredentials 10.72
35 TestAddons/parallel/Registry 33.17
36 TestAddons/parallel/Ingress 66.43
37 TestAddons/parallel/InspektorGadget 27.85
38 TestAddons/parallel/MetricsServer 22.03
40 TestAddons/parallel/CSI 79.01
41 TestAddons/parallel/Headlamp 40.91
42 TestAddons/parallel/CloudSpanner 13.55
43 TestAddons/parallel/LocalPath 32.1
44 TestAddons/parallel/NvidiaDevicePlugin 21.93
45 TestAddons/parallel/Yakd 28.06
47 TestAddons/StoppedEnableDisable 53.48
49 TestCertExpiration 937.62
50 TestDockerFlags 418.51
51 TestForceSystemdFlag 410.64
52 TestForceSystemdEnv 412.03
59 TestErrorSpam/start 17.04
60 TestErrorSpam/status 36.79
61 TestErrorSpam/pause 23.92
62 TestErrorSpam/unpause 23.78
63 TestErrorSpam/stop 61.8
66 TestFunctional/serial/CopySyncFile 0.04
67 TestFunctional/serial/StartWithProxy 231.08
68 TestFunctional/serial/AuditLog 0
69 TestFunctional/serial/SoftStart 124.81
70 TestFunctional/serial/KubeContext 0.13
71 TestFunctional/serial/KubectlGetPods 0.23
74 TestFunctional/serial/CacheCmd/cache/add_remote 26.26
75 TestFunctional/serial/CacheCmd/cache/add_local 10.64
76 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.29
77 TestFunctional/serial/CacheCmd/cache/list 0.27
78 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 9.44
79 TestFunctional/serial/CacheCmd/cache/cache_reload 36.8
80 TestFunctional/serial/CacheCmd/cache/delete 0.54
81 TestFunctional/serial/MinikubeKubectlCmd 0.52
82 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.89
83 TestFunctional/serial/ExtraConfig 126.67
84 TestFunctional/serial/ComponentHealth 0.18
85 TestFunctional/serial/LogsCmd 8.38
86 TestFunctional/serial/LogsFileCmd 10.59
87 TestFunctional/serial/InvalidService 21.02
89 TestFunctional/parallel/ConfigCmd 1.89
93 TestFunctional/parallel/StatusCmd 43.3
97 TestFunctional/parallel/ServiceCmdConnect 27.34
98 TestFunctional/parallel/AddonsCmd 0.62
99 TestFunctional/parallel/PersistentVolumeClaim 44.06
101 TestFunctional/parallel/SSHCmd 22.51
102 TestFunctional/parallel/CpCmd 61.8
103 TestFunctional/parallel/MySQL 58.16
104 TestFunctional/parallel/FileSync 10.53
105 TestFunctional/parallel/CertSync 67.09
109 TestFunctional/parallel/NodeLabels 0.22
111 TestFunctional/parallel/NonActiveRuntimeDisabled 12.01
113 TestFunctional/parallel/License 1.79
114 TestFunctional/parallel/DockerEnv/powershell 47.13
115 TestFunctional/parallel/UpdateContextCmd/no_changes 2.98
116 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 2.65
117 TestFunctional/parallel/UpdateContextCmd/no_clusters 2.57
118 TestFunctional/parallel/Version/short 0.25
119 TestFunctional/parallel/Version/components 8.23
120 TestFunctional/parallel/ServiceCmd/DeployApp 17.51
121 TestFunctional/parallel/ServiceCmd/List 14.27
122 TestFunctional/parallel/ServiceCmd/JSONOutput 14.24
125 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 9.46
126 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
128 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 13.73
135 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.21
137 TestFunctional/parallel/ProfileCmd/profile_not_create 14.1
138 TestFunctional/parallel/ProfileCmd/profile_list 15.05
139 TestFunctional/parallel/ImageCommands/ImageListShort 8.15
140 TestFunctional/parallel/ImageCommands/ImageListTable 8
141 TestFunctional/parallel/ImageCommands/ImageListJson 7.96
142 TestFunctional/parallel/ImageCommands/ImageListYaml 8.03
143 TestFunctional/parallel/ImageCommands/ImageBuild 28.56
144 TestFunctional/parallel/ImageCommands/Setup 2.38
145 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 19.17
146 TestFunctional/parallel/ProfileCmd/profile_json_output 14.58
147 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 16.99
148 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 16.53
149 TestFunctional/parallel/ImageCommands/ImageSaveToFile 7.66
150 TestFunctional/parallel/ImageCommands/ImageRemove 14.79
151 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 14.87
152 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 7.75
153 TestFunctional/delete_echo-server_images 0.22
154 TestFunctional/delete_my-image_image 0.08
155 TestFunctional/delete_minikube_cached_images 0.09
160 TestMultiControlPlane/serial/StartCluster 706.64
161 TestMultiControlPlane/serial/DeployApp 15.01
163 TestMultiControlPlane/serial/AddWorkerNode 261.74
164 TestMultiControlPlane/serial/NodeLabels 0.19
165 TestMultiControlPlane/serial/HAppyAfterClusterStart 48.87
166 TestMultiControlPlane/serial/CopyFile 641.37
170 TestImageBuild/serial/Setup 192.61
171 TestImageBuild/serial/NormalBuild 10.45
172 TestImageBuild/serial/BuildWithBuildArg 8.76
173 TestImageBuild/serial/BuildWithDockerIgnore 8.09
174 TestImageBuild/serial/BuildWithSpecifiedDockerfile 8.12
178 TestJSONOutput/start/Command 199.03
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 7.95
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 7.77
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 34.13
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.97
206 TestMainNoArgs 0.23
207 TestMinikubeProfile 527.31
210 TestMountStart/serial/StartWithMountFirst 151.53
211 TestMountStart/serial/VerifyMountFirst 9.43
212 TestMountStart/serial/StartWithMountSecond 153.28
213 TestMountStart/serial/VerifyMountSecond 9.36
214 TestMountStart/serial/DeleteFirst 30.31
215 TestMountStart/serial/VerifyMountPostDelete 9.38
216 TestMountStart/serial/Stop 26.16
217 TestMountStart/serial/RestartStopped 117.97
218 TestMountStart/serial/VerifyMountPostStop 9.51
221 TestMultiNode/serial/FreshStart2Nodes 430.34
222 TestMultiNode/serial/DeployApp2Nodes 9.98
224 TestMultiNode/serial/AddNode 239.5
225 TestMultiNode/serial/MultiNodeLabels 0.17
226 TestMultiNode/serial/ProfileList 35.8
227 TestMultiNode/serial/CopyFile 371.86
228 TestMultiNode/serial/StopNode 82.16
229 TestMultiNode/serial/StartAfterStop 205.48
234 TestPreload 539.01
235 TestScheduledStopWindows 337.14
240 TestRunningBinaryUpgrade 1105.26
245 TestNoKubernetes/serial/StartNoK8sWithVersion 0.39
247 TestStoppedBinaryUpgrade/Setup 0.96
248 TestStoppedBinaryUpgrade/Upgrade 925.04
268 TestPause/serial/Start 549.14
269 TestPause/serial/SecondStartNoReconfiguration 320.18
270 TestStoppedBinaryUpgrade/MinikubeLogs 10.08
271 TestPause/serial/Pause 9.13
272 TestPause/serial/VerifyStatus 13.19
TestDownloadOnly/v1.20.0/json-events (17.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-368300 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-368300 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperv: (17.2977581s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (17.30s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0.07s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
I0407 12:19:14.064798    7728 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
I0407 12:19:14.137866    7728 preload.go:146] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.07s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.34s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-368300
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-368300: exit status 85 (344.3841ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-368300 | minikube3\jenkins | v1.35.0 | 07 Apr 25 12:18 UTC |          |
	|         | -p download-only-368300        |                      |                   |         |                     |          |
	|         | --force --alsologtostderr      |                      |                   |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |          |
	|         | --container-runtime=docker     |                      |                   |         |                     |          |
	|         | --driver=hyperv                |                      |                   |         |                     |          |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:18:56
	Running on machine: minikube3
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:18:56.880401    2492 out.go:345] Setting OutFile to fd 728 ...
	I0407 12:18:56.957405    2492 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:18:56.957405    2492 out.go:358] Setting ErrFile to fd 732...
	I0407 12:18:56.957405    2492 out.go:392] TERM=,COLORTERM=, which probably does not support color
	W0407 12:18:56.970405    2492 root.go:314] Error reading config file at C:\Users\jenkins.minikube3\minikube-integration\.minikube\config\config.json: open C:\Users\jenkins.minikube3\minikube-integration\.minikube\config\config.json: The system cannot find the path specified.
	I0407 12:18:56.979410    2492 out.go:352] Setting JSON to true
	I0407 12:18:56.984404    2492 start.go:129] hostinfo: {"hostname":"minikube3","uptime":129,"bootTime":1744028207,"procs":176,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0407 12:18:56.984404    2492 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0407 12:18:56.992438    2492 out.go:97] [download-only-368300] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0407 12:18:56.992438    2492 notify.go:220] Checking for updates...
	W0407 12:18:56.992438    2492 preload.go:293] Failed to list preload files: open C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball: The system cannot find the file specified.
	I0407 12:18:56.995405    2492 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 12:18:56.998285    2492 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0407 12:18:57.001272    2492 out.go:169] MINIKUBE_LOCATION=20598
	I0407 12:18:57.003267    2492 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0407 12:18:57.009628    2492 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0407 12:18:57.010425    2492 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:19:02.322861    2492 out.go:97] Using the hyperv driver based on user configuration
	I0407 12:19:02.322966    2492 start.go:297] selected driver: hyperv
	I0407 12:19:02.322966    2492 start.go:901] validating driver "hyperv" against <nil>
	I0407 12:19:02.322966    2492 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 12:19:02.373157    2492 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0407 12:19:02.374028    2492 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0407 12:19:02.374779    2492 cni.go:84] Creating CNI manager for ""
	I0407 12:19:02.374779    2492 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0407 12:19:02.374779    2492 start.go:340] cluster config:
	{Name:download-only-368300 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-368300 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:19:02.377007    2492 iso.go:125] acquiring lock: {Name:mk99bbb6a54210c1995fdf151b41c83b57c3735b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 12:19:02.380155    2492 out.go:97] Downloading VM boot image ...
	I0407 12:19:02.380712    2492 download.go:108] Downloading: https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso?checksum=file:https://storage.googleapis.com/minikube/iso/minikube-v1.35.0-amd64.iso.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\iso\amd64\minikube-v1.35.0-amd64.iso
	I0407 12:19:05.724395    2492 out.go:97] Starting "download-only-368300" primary control-plane node in "download-only-368300" cluster
	I0407 12:19:05.724395    2492 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0407 12:19:05.770355    2492 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0407 12:19:05.770355    2492 cache.go:56] Caching tarball of preloaded images
	I0407 12:19:05.770536    2492 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0407 12:19:05.774012    2492 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0407 12:19:05.774122    2492 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0407 12:19:05.851710    2492 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0407 12:19:08.544371    2492 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0407 12:19:08.546104    2492 preload.go:254] verifying checksum of C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0407 12:19:09.548515    2492 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0407 12:19:09.549601    2492 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\download-only-368300\config.json ...
	I0407 12:19:09.550193    2492 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\download-only-368300\config.json: {Name:mk8c3bb6505e43c077f593a93d026a3002ec4d99 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:19:09.551808    2492 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0407 12:19:09.553838    2492 download.go:108] Downloading: https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\windows\amd64\v1.20.0/kubectl.exe
	
	
	* The control-plane node download-only-368300 host does not exist
	  To start a cluster, run: "minikube start -p download-only-368300"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.34s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.68s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.68s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.81s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-368300
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.81s)

                                                
                                    
TestDownloadOnly/v1.32.2/json-events (12.99s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-384800 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=docker --driver=hyperv
aaa_download_only_test.go:81: (dbg) Done: out/minikube-windows-amd64.exe start -o=json --download-only -p download-only-384800 --force --alsologtostderr --kubernetes-version=v1.32.2 --container-runtime=docker --driver=hyperv: (12.9875738s)
--- PASS: TestDownloadOnly/v1.32.2/json-events (12.99s)

                                                
                                    
TestDownloadOnly/v1.32.2/preload-exists (0.03s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/preload-exists
I0407 12:19:28.969041    7728 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
I0407 12:19:28.995158    7728 preload.go:146] Found local preload: C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
--- PASS: TestDownloadOnly/v1.32.2/preload-exists (0.03s)

                                                
                                    
TestDownloadOnly/v1.32.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/kubectl
--- PASS: TestDownloadOnly/v1.32.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.32.2/LogsDuration (0.42s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe logs -p download-only-384800
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-windows-amd64.exe logs -p download-only-384800: exit status 85 (414.7826ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |       User        | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-368300 | minikube3\jenkins | v1.35.0 | 07 Apr 25 12:18 UTC |                     |
	|         | -p download-only-368300        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	| delete  | --all                          | minikube             | minikube3\jenkins | v1.35.0 | 07 Apr 25 12:19 UTC | 07 Apr 25 12:19 UTC |
	| delete  | -p download-only-368300        | download-only-368300 | minikube3\jenkins | v1.35.0 | 07 Apr 25 12:19 UTC | 07 Apr 25 12:19 UTC |
	| start   | -o=json --download-only        | download-only-384800 | minikube3\jenkins | v1.35.0 | 07 Apr 25 12:19 UTC |                     |
	|         | -p download-only-384800        |                      |                   |         |                     |                     |
	|         | --force --alsologtostderr      |                      |                   |         |                     |                     |
	|         | --kubernetes-version=v1.32.2   |                      |                   |         |                     |                     |
	|         | --container-runtime=docker     |                      |                   |         |                     |                     |
	|         | --driver=hyperv                |                      |                   |         |                     |                     |
	|---------|--------------------------------|----------------------|-------------------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2025/04/07 12:19:16
	Running on machine: minikube3
	Binary: Built with gc go1.24.0 for windows/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0407 12:19:16.103848   13516 out.go:345] Setting OutFile to fd 740 ...
	I0407 12:19:16.183940   13516 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:19:16.183940   13516 out.go:358] Setting ErrFile to fd 748...
	I0407 12:19:16.183940   13516 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:19:16.202951   13516 out.go:352] Setting JSON to true
	I0407 12:19:16.206486   13516 start.go:129] hostinfo: {"hostname":"minikube3","uptime":148,"bootTime":1744028207,"procs":177,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0407 12:19:16.206486   13516 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0407 12:19:16.211685   13516 out.go:97] [download-only-384800] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0407 12:19:16.211685   13516 notify.go:220] Checking for updates...
	I0407 12:19:16.214249   13516 out.go:169] KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 12:19:16.217521   13516 out.go:169] MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0407 12:19:16.220280   13516 out.go:169] MINIKUBE_LOCATION=20598
	I0407 12:19:16.222921   13516 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	W0407 12:19:16.226138   13516 out.go:321] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0407 12:19:16.228761   13516 driver.go:394] Setting default libvirt URI to qemu:///system
	I0407 12:19:21.493317   13516 out.go:97] Using the hyperv driver based on user configuration
	I0407 12:19:21.493411   13516 start.go:297] selected driver: hyperv
	I0407 12:19:21.493558   13516 start.go:901] validating driver "hyperv" against <nil>
	I0407 12:19:21.493859   13516 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0407 12:19:21.542283   13516 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=65534MB, container=0MB
	I0407 12:19:21.543315   13516 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0407 12:19:21.543315   13516 cni.go:84] Creating CNI manager for ""
	I0407 12:19:21.543571   13516 cni.go:158] "hyperv" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0407 12:19:21.543600   13516 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0407 12:19:21.543811   13516 start.go:340] cluster config:
	{Name:download-only-384800 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.46-1743675393-20591@sha256:8de8167e280414c9d16e4c3da59bc85bc7c9cc24228af995863fc7bcabfcf727 Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperv HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.32.2 ClusterName:download-only-384800 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.32.2 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:C:\Users\jenkins.minikube3:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0407 12:19:21.543811   13516 iso.go:125] acquiring lock: {Name:mk99bbb6a54210c1995fdf151b41c83b57c3735b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0407 12:19:21.548521   13516 out.go:97] Starting "download-only-384800" primary control-plane node in "download-only-384800" cluster
	I0407 12:19:21.548579   13516 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 12:19:21.603107   13516 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0407 12:19:21.603223   13516 cache.go:56] Caching tarball of preloaded images
	I0407 12:19:21.604007   13516 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 12:19:21.607140   13516 out.go:97] Downloading Kubernetes v1.32.2 preload ...
	I0407 12:19:21.607188   13516 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 ...
	I0407 12:19:21.681105   13516 download.go:108] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.32.2/preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4?checksum=md5:c3fdd273d8c9002513e1c87be8fe9ffc -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4
	I0407 12:19:24.255687   13516 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 ...
	I0407 12:19:24.256673   13516 preload.go:254] verifying checksum of C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\preloaded-tarball\preloaded-images-k8s-v18-v1.32.2-docker-overlay2-amd64.tar.lz4 ...
	I0407 12:19:25.097674   13516 cache.go:59] Finished verifying existence of preloaded tar for v1.32.2 on docker
	I0407 12:19:25.097897   13516 profile.go:143] Saving config to C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\download-only-384800\config.json ...
	I0407 12:19:25.098674   13516 lock.go:35] WriteFile acquiring C:\Users\jenkins.minikube3\minikube-integration\.minikube\profiles\download-only-384800\config.json: {Name:mk0ae6a74c1f7c22a052fbfdc3207d2f226d9f76 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0407 12:19:25.099504   13516 preload.go:131] Checking if preload exists for k8s version v1.32.2 and runtime docker
	I0407 12:19:25.100342   13516 download.go:108] Downloading: https://dl.k8s.io/release/v1.32.2/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/windows/amd64/kubectl.exe.sha256 -> C:\Users\jenkins.minikube3\minikube-integration\.minikube\cache\windows\amd64\v1.32.2/kubectl.exe
	
	
	* The control-plane node download-only-384800 host does not exist
	  To start a cluster, run: "minikube start -p download-only-384800"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.32.2/LogsDuration (0.42s)

TestDownloadOnly/v1.32.2/DeleteAll (0.73s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-windows-amd64.exe delete --all
--- PASS: TestDownloadOnly/v1.32.2/DeleteAll (0.73s)

TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.79s)

=== RUN   TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-windows-amd64.exe delete -p download-only-384800
--- PASS: TestDownloadOnly/v1.32.2/DeleteAlwaysSucceeds (0.79s)

TestBinaryMirror (9.97s)

=== RUN   TestBinaryMirror
I0407 12:19:32.352970    7728 binary.go:74] Not caching binary, using https://dl.k8s.io/release/v1.32.2/bin/windows/amd64/kubectl.exe?checksum=file:https://dl.k8s.io/release/v1.32.2/bin/windows/amd64/kubectl.exe.sha256
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-windows-amd64.exe start --download-only -p binary-mirror-001200 --alsologtostderr --binary-mirror http://127.0.0.1:53245 --driver=hyperv
aaa_download_only_test.go:314: (dbg) Done: out/minikube-windows-amd64.exe start --download-only -p binary-mirror-001200 --alsologtostderr --binary-mirror http://127.0.0.1:53245 --driver=hyperv: (9.2647001s)
helpers_test.go:175: Cleaning up "binary-mirror-001200" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p binary-mirror-001200
--- PASS: TestBinaryMirror (9.97s)

TestOffline (286.39s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe start -p offline-docker-817400 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv
aab_offline_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe start -p offline-docker-817400 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperv: (4m0.4983687s)
helpers_test.go:175: Cleaning up "offline-docker-817400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p offline-docker-817400
E0407 14:48:54.493800    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p offline-docker-817400: (45.8897821s)
--- PASS: TestOffline (286.39s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.28s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-823400
addons_test.go:939: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons enable dashboard -p addons-823400: exit status 85 (277.1133ms)

-- stdout --
	* Profile "addons-823400" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-823400"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.28s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.26s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:950: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-823400
addons_test.go:950: (dbg) Non-zero exit: out/minikube-windows-amd64.exe addons disable dashboard -p addons-823400: exit status 85 (260.9383ms)

-- stdout --
	* Profile "addons-823400" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-823400"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.26s)

TestAddons/Setup (433.13s)

=== RUN   TestAddons/Setup
addons_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe start -p addons-823400 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=hyperv --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher
addons_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe start -p addons-823400 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=nvidia-device-plugin --addons=yakd --addons=volcano --addons=amd-gpu-device-plugin --driver=hyperv --addons=ingress --addons=ingress-dns --addons=storage-provisioner-rancher: (7m13.1289999s)
--- PASS: TestAddons/Setup (433.13s)

TestAddons/serial/Volcano (66.49s)

=== RUN   TestAddons/serial/Volcano
addons_test.go:823: volcano-controller stabilized in 18.2438ms
addons_test.go:815: volcano-admission stabilized in 18.4487ms
addons_test.go:807: volcano-scheduler stabilized in 19.7197ms
addons_test.go:829: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-75fdd99bcf-4svkz" [32648407-90b2-4809-b1ae-48deef1d50e0] Running
addons_test.go:829: (dbg) TestAddons/serial/Volcano: app=volcano-scheduler healthy within 5.0061328s
addons_test.go:833: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-75d8f6b5c-74dlx" [7e93d53a-de1d-4a47-b660-c07278c9b244] Running
addons_test.go:833: (dbg) TestAddons/serial/Volcano: app=volcano-admission healthy within 5.0054703s
addons_test.go:837: (dbg) TestAddons/serial/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-86bdc5c9c-zq6f5" [e55878e6-0658-4567-ae92-f43dacaedda3] Running
addons_test.go:837: (dbg) TestAddons/serial/Volcano: app=volcano-controller healthy within 6.0087988s
addons_test.go:842: (dbg) Run:  kubectl --context addons-823400 delete -n volcano-system job volcano-admission-init
addons_test.go:848: (dbg) Run:  kubectl --context addons-823400 create -f testdata\vcjob.yaml
addons_test.go:856: (dbg) Run:  kubectl --context addons-823400 get vcjob -n my-volcano
addons_test.go:874: (dbg) TestAddons/serial/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [25490ea3-b038-45a3-8553-cc3f82ccadaf] Pending
helpers_test.go:344: "test-job-nginx-0" [25490ea3-b038-45a3-8553-cc3f82ccadaf] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [25490ea3-b038-45a3-8553-cc3f82ccadaf] Running
addons_test.go:874: (dbg) TestAddons/serial/Volcano: volcano.sh/job-name=test-job healthy within 23.0064708s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-823400 addons disable volcano --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-823400 addons disable volcano --alsologtostderr -v=1: (26.5691122s)
--- PASS: TestAddons/serial/Volcano (66.49s)

TestAddons/serial/GCPAuth/Namespaces (0.48s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:569: (dbg) Run:  kubectl --context addons-823400 create ns new-namespace
addons_test.go:583: (dbg) Run:  kubectl --context addons-823400 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.48s)

TestAddons/serial/GCPAuth/FakeCredentials (10.72s)

=== RUN   TestAddons/serial/GCPAuth/FakeCredentials
addons_test.go:614: (dbg) Run:  kubectl --context addons-823400 create -f testdata\busybox.yaml
addons_test.go:621: (dbg) Run:  kubectl --context addons-823400 create sa gcp-auth-test
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [61c5f76d-ccdc-4301-8098-46f7b0ae3eb3] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [61c5f76d-ccdc-4301-8098-46f7b0ae3eb3] Running
addons_test.go:627: (dbg) TestAddons/serial/GCPAuth/FakeCredentials: integration-test=busybox healthy within 9.0071956s
addons_test.go:633: (dbg) Run:  kubectl --context addons-823400 exec busybox -- /bin/sh -c "printenv GOOGLE_APPLICATION_CREDENTIALS"
addons_test.go:645: (dbg) Run:  kubectl --context addons-823400 describe sa gcp-auth-test
addons_test.go:659: (dbg) Run:  kubectl --context addons-823400 exec busybox -- /bin/sh -c "cat /google-app-creds.json"
addons_test.go:683: (dbg) Run:  kubectl --context addons-823400 exec busybox -- /bin/sh -c "printenv GOOGLE_CLOUD_PROJECT"
--- PASS: TestAddons/serial/GCPAuth/FakeCredentials (10.72s)

TestAddons/parallel/Registry (33.17s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:321: registry stabilized in 9.0026ms
addons_test.go:323: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-6c88467877-vj7w5" [b415ea07-0cb6-4884-a825-ca2a176b33b1] Running
addons_test.go:323: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.0051096s
addons_test.go:326: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-84ppn" [123a8773-5314-4576-be29-96af7558a19f] Running
addons_test.go:326: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.0063591s
addons_test.go:331: (dbg) Run:  kubectl --context addons-823400 delete po -l run=registry-test --now
addons_test.go:336: (dbg) Run:  kubectl --context addons-823400 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:336: (dbg) Done: kubectl --context addons-823400 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (4.5856052s)
addons_test.go:350: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-823400 ip
addons_test.go:350: (dbg) Done: out/minikube-windows-amd64.exe -p addons-823400 ip: (2.805773s)
2025/04/07 12:29:01 [DEBUG] GET http://172.17.95.71:5000
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-823400 addons disable registry --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-823400 addons disable registry --alsologtostderr -v=1: (15.5359894s)
--- PASS: TestAddons/parallel/Registry (33.17s)
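Note: the in-cluster reachability probe above (wget --spider -S against http://registry.kube-system.svc.cluster.local) could be sketched in Go as well. The service URL is taken from the log; the program below is only an illustrative assumption, not part of the minikube test suite, and would have to run from a pod inside the cluster to resolve that service name.

	package main

	import (
		"fmt"
		"net/http"
		"os"
		"time"
	)

	func main() {
		// Same endpoint the registry-test pod probes with `wget --spider`.
		const registryURL = "http://registry.kube-system.svc.cluster.local"

		client := &http.Client{Timeout: 10 * time.Second}
		resp, err := client.Head(registryURL) // HEAD only, like --spider: no body download
		if err != nil {
			fmt.Fprintln(os.Stderr, "registry not reachable:", err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		fmt.Println("registry answered with status:", resp.Status)
	}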

                                                
                                    
TestAddons/parallel/Ingress (66.43s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-823400 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-823400 replace --force -f testdata\nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-823400 replace --force -f testdata\nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [20067ffc-4014-402e-b4ab-7efe9342f752] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [20067ffc-4014-402e-b4ab-7efe9342f752] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 13.0103629s
I0407 12:29:47.438408    7728 kapi.go:150] Service nginx in namespace default found.
addons_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-823400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p addons-823400 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": (10.7735578s)
addons_test.go:286: (dbg) Run:  kubectl --context addons-823400 replace --force -f testdata\ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-823400 ip
addons_test.go:291: (dbg) Done: out/minikube-windows-amd64.exe -p addons-823400 ip: (2.5498234s)
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 172.17.95.71
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-823400 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-823400 addons disable ingress-dns --alsologtostderr -v=1: (16.2582623s)
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-823400 addons disable ingress --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-823400 addons disable ingress --alsologtostderr -v=1: (21.6500759s)
--- PASS: TestAddons/parallel/Ingress (66.43s)
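Note: the ssh step above verifies the Ingress rule by sending a request to http://127.0.0.1/ with the Host header overridden to nginx.example.com. A rough Go equivalent of that single request (URL and host name copied from the log, everything else an assumption) is:

	package main

	import (
		"fmt"
		"io"
		"net/http"
		"os"
	)

	func main() {
		req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1/", nil)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		// Setting req.Host (not a plain header) is how Go sends `-H 'Host: ...'`.
		req.Host = "nginx.example.com"

		resp, err := http.DefaultClient.Do(req)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		fmt.Println(resp.Status)
		fmt.Println(string(body))
	}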

                                                
                                    
TestAddons/parallel/InspektorGadget (27.85s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-pv8k7" [1bd8a253-785a-4219-add4-476553d19419] Running
addons_test.go:762: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.0068044s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-823400 addons disable inspektor-gadget --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-823400 addons disable inspektor-gadget --alsologtostderr -v=1: (21.843026s)
--- PASS: TestAddons/parallel/InspektorGadget (27.85s)

TestAddons/parallel/MetricsServer (22.03s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:394: metrics-server stabilized in 10.0186ms
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-7fbb699795-tsv7f" [88bcb294-56c6-443e-8db9-ab7fb716d444] Running
addons_test.go:396: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0041369s
addons_test.go:402: (dbg) Run:  kubectl --context addons-823400 top pods -n kube-system
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-823400 addons disable metrics-server --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-823400 addons disable metrics-server --alsologtostderr -v=1: (16.8123481s)
--- PASS: TestAddons/parallel/MetricsServer (22.03s)

TestAddons/parallel/CSI (79.01s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
I0407 12:29:05.488810    7728 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
I0407 12:29:05.497649    7728 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
I0407 12:29:05.497649    7728 kapi.go:107] duration metric: took 8.8969ms to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
addons_test.go:488: csi-hostpath-driver pods stabilized in 8.8969ms
addons_test.go:491: (dbg) Run:  kubectl --context addons-823400 create -f testdata\csi-hostpath-driver\pvc.yaml
addons_test.go:496: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823400 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823400 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:501: (dbg) Run:  kubectl --context addons-823400 create -f testdata\csi-hostpath-driver\pv-pod.yaml
addons_test.go:506: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [10b79cf0-d07e-47d4-bb33-73e34edd83be] Pending
helpers_test.go:344: "task-pv-pod" [10b79cf0-d07e-47d4-bb33-73e34edd83be] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [10b79cf0-d07e-47d4-bb33-73e34edd83be] Running
addons_test.go:506: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 9.0052489s
addons_test.go:511: (dbg) Run:  kubectl --context addons-823400 create -f testdata\csi-hostpath-driver\snapshot.yaml
addons_test.go:516: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-823400 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-823400 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:521: (dbg) Run:  kubectl --context addons-823400 delete pod task-pv-pod
addons_test.go:521: (dbg) Done: kubectl --context addons-823400 delete pod task-pv-pod: (1.5511044s)
addons_test.go:527: (dbg) Run:  kubectl --context addons-823400 delete pvc hpvc
addons_test.go:533: (dbg) Run:  kubectl --context addons-823400 create -f testdata\csi-hostpath-driver\pvc-restore.yaml
addons_test.go:538: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823400 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:543: (dbg) Run:  kubectl --context addons-823400 create -f testdata\csi-hostpath-driver\pv-pod-restore.yaml
addons_test.go:548: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [23fd80c3-351f-454b-b7da-671d963ea87f] Pending
helpers_test.go:344: "task-pv-pod-restore" [23fd80c3-351f-454b-b7da-671d963ea87f] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [23fd80c3-351f-454b-b7da-671d963ea87f] Running
addons_test.go:548: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 11.0059172s
addons_test.go:553: (dbg) Run:  kubectl --context addons-823400 delete pod task-pv-pod-restore
addons_test.go:553: (dbg) Done: kubectl --context addons-823400 delete pod task-pv-pod-restore: (1.7451361s)
addons_test.go:557: (dbg) Run:  kubectl --context addons-823400 delete pvc hpvc-restore
addons_test.go:561: (dbg) Run:  kubectl --context addons-823400 delete volumesnapshot new-snapshot-demo
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-823400 addons disable volumesnapshots --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-823400 addons disable volumesnapshots --alsologtostderr -v=1: (17.0726447s)
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-823400 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-823400 addons disable csi-hostpath-driver --alsologtostderr -v=1: (21.4059836s)
--- PASS: TestAddons/parallel/CSI (79.01s)
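Note: the repeated kubectl get pvc ... -o jsonpath={.status.phase} calls above simply poll the claim until it reports Bound. A sketch of the same wait loop with client-go is shown below; the namespace (default), claim name (hpvc), and 6-minute ceiling mirror the log, while the kubeconfig path is a placeholder assumption and the code is not taken from the test suite.

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Placeholder kubeconfig path; the tests point at the minikube-managed one instead.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()

		// Poll the claim until it reaches the Bound phase, like the jsonpath loop above.
		for {
			pvc, err := cs.CoreV1().PersistentVolumeClaims("default").Get(ctx, "hpvc", metav1.GetOptions{})
			if err == nil && pvc.Status.Phase == corev1.ClaimBound {
				fmt.Println("pvc is Bound")
				return
			}
			select {
			case <-ctx.Done():
				panic("timed out waiting for pvc to become Bound")
			case <-time.After(2 * time.Second):
			}
		}
	}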

                                                
                                    
TestAddons/parallel/Headlamp (40.91s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:747: (dbg) Run:  out/minikube-windows-amd64.exe addons enable headlamp -p addons-823400 --alsologtostderr -v=1
addons_test.go:747: (dbg) Done: out/minikube-windows-amd64.exe addons enable headlamp -p addons-823400 --alsologtostderr -v=1: (16.0373913s)
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5d4b5d7bd6-cgsw7" [702a77a8-02b9-4c26-adc1-6a8dfd04f5a4] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-cgsw7" [702a77a8-02b9-4c26-adc1-6a8dfd04f5a4] Running / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5d4b5d7bd6-cgsw7" [702a77a8-02b9-4c26-adc1-6a8dfd04f5a4] Running
addons_test.go:752: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 17.0328473s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-823400 addons disable headlamp --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-823400 addons disable headlamp --alsologtostderr -v=1: (7.8403156s)
--- PASS: TestAddons/parallel/Headlamp (40.91s)

TestAddons/parallel/CloudSpanner (13.55s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-cc9755fc7-6v5x6" [d8feeb49-bea1-4384-b754-d626d3c9db47] Running
addons_test.go:779: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.0059606s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-823400 addons disable cloud-spanner --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-823400 addons disable cloud-spanner --alsologtostderr -v=1: (8.5303817s)
--- PASS: TestAddons/parallel/CloudSpanner (13.55s)

TestAddons/parallel/LocalPath (32.1s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:888: (dbg) Run:  kubectl --context addons-823400 apply -f testdata\storage-provisioner-rancher\pvc.yaml
addons_test.go:894: (dbg) Run:  kubectl --context addons-823400 apply -f testdata\storage-provisioner-rancher\pod.yaml
addons_test.go:898: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823400 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-823400 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [377d3b90-fc2c-454e-bcf1-900a27e40b9f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [377d3b90-fc2c-454e-bcf1-900a27e40b9f] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [377d3b90-fc2c-454e-bcf1-900a27e40b9f] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:901: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.0731518s
addons_test.go:906: (dbg) Run:  kubectl --context addons-823400 get pvc test-pvc -o=json
addons_test.go:915: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-823400 ssh "cat /opt/local-path-provisioner/pvc-8fc7aff3-499b-46a0-9ce7-f727a1934182_default_test-pvc/file1"
addons_test.go:915: (dbg) Done: out/minikube-windows-amd64.exe -p addons-823400 ssh "cat /opt/local-path-provisioner/pvc-8fc7aff3-499b-46a0-9ce7-f727a1934182_default_test-pvc/file1": (9.7056618s)
addons_test.go:927: (dbg) Run:  kubectl --context addons-823400 delete pod test-local-path
addons_test.go:931: (dbg) Run:  kubectl --context addons-823400 delete pvc test-pvc
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-823400 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-823400 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (8.7065035s)
--- PASS: TestAddons/parallel/LocalPath (32.10s)

TestAddons/parallel/NvidiaDevicePlugin (21.93s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-j8mk2" [f023880d-e7a5-4d2a-9487-e01b8ceaf5ff] Running
addons_test.go:964: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.0051675s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-823400 addons disable nvidia-device-plugin --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-823400 addons disable nvidia-device-plugin --alsologtostderr -v=1: (15.9201661s)
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (21.93s)

TestAddons/parallel/Yakd (28.06s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-575dd5996b-npvxb" [c834b777-36c1-4a39-868e-765ca498fc4f] Running
addons_test.go:986: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.0060389s
addons_test.go:992: (dbg) Run:  out/minikube-windows-amd64.exe -p addons-823400 addons disable yakd --alsologtostderr -v=1
addons_test.go:992: (dbg) Done: out/minikube-windows-amd64.exe -p addons-823400 addons disable yakd --alsologtostderr -v=1: (22.0455049s)
--- PASS: TestAddons/parallel/Yakd (28.06s)

TestAddons/StoppedEnableDisable (53.48s)

=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:170: (dbg) Run:  out/minikube-windows-amd64.exe stop -p addons-823400
addons_test.go:170: (dbg) Done: out/minikube-windows-amd64.exe stop -p addons-823400: (40.9480978s)
addons_test.go:174: (dbg) Run:  out/minikube-windows-amd64.exe addons enable dashboard -p addons-823400
addons_test.go:174: (dbg) Done: out/minikube-windows-amd64.exe addons enable dashboard -p addons-823400: (4.8968375s)
addons_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe addons disable dashboard -p addons-823400
addons_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe addons disable dashboard -p addons-823400: (4.9269673s)
addons_test.go:183: (dbg) Run:  out/minikube-windows-amd64.exe addons disable gvisor -p addons-823400
addons_test.go:183: (dbg) Done: out/minikube-windows-amd64.exe addons disable gvisor -p addons-823400: (2.7037533s)
--- PASS: TestAddons/StoppedEnableDisable (53.48s)

TestCertExpiration (937.62s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-287100 --memory=2048 --cert-expiration=3m --driver=hyperv
cert_options_test.go:123: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-287100 --memory=2048 --cert-expiration=3m --driver=hyperv: (5m53.051275s)
E0407 15:08:54.503716    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
cert_options_test.go:131: (dbg) Run:  out/minikube-windows-amd64.exe start -p cert-expiration-287100 --memory=2048 --cert-expiration=8760h --driver=hyperv
cert_options_test.go:131: (dbg) Done: out/minikube-windows-amd64.exe start -p cert-expiration-287100 --memory=2048 --cert-expiration=8760h --driver=hyperv: (5m57.4771256s)
helpers_test.go:175: Cleaning up "cert-expiration-287100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p cert-expiration-287100
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p cert-expiration-287100: (47.0951483s)
--- PASS: TestCertExpiration (937.62s)

TestDockerFlags (418.51s)

=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe start -p docker-flags-422800 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv
docker_test.go:51: (dbg) Done: out/minikube-windows-amd64.exe start -p docker-flags-422800 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperv: (5m57.3095499s)
docker_test.go:56: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-422800 ssh "sudo systemctl show docker --property=Environment --no-pager"
E0407 15:11:55.792447    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 15:11:57.611060    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
docker_test.go:56: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-422800 ssh "sudo systemctl show docker --property=Environment --no-pager": (10.2177899s)
docker_test.go:67: (dbg) Run:  out/minikube-windows-amd64.exe -p docker-flags-422800 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
docker_test.go:67: (dbg) Done: out/minikube-windows-amd64.exe -p docker-flags-422800 ssh "sudo systemctl show docker --property=ExecStart --no-pager": (10.0573729s)
helpers_test.go:175: Cleaning up "docker-flags-422800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p docker-flags-422800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p docker-flags-422800: (40.9185303s)
--- PASS: TestDockerFlags (418.51s)

TestForceSystemdFlag (410.64s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-flag-817400 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv
docker_test.go:91: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-flag-817400 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperv: (5m52.3473048s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-flag-817400 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-flag-817400 ssh "docker info --format {{.CgroupDriver}}": (10.4262458s)
helpers_test.go:175: Cleaning up "force-systemd-flag-817400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-flag-817400
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-flag-817400: (47.8698746s)
--- PASS: TestForceSystemdFlag (410.64s)

TestForceSystemdEnv (412.03s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe start -p force-systemd-env-498800 --memory=2048 --alsologtostderr -v=5 --driver=hyperv
docker_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe start -p force-systemd-env-498800 --memory=2048 --alsologtostderr -v=5 --driver=hyperv: (6m2.5740005s)
docker_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe -p force-systemd-env-498800 ssh "docker info --format {{.CgroupDriver}}"
docker_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe -p force-systemd-env-498800 ssh "docker info --format {{.CgroupDriver}}": (10.1946955s)
helpers_test.go:175: Cleaning up "force-systemd-env-498800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p force-systemd-env-498800
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p force-systemd-env-498800: (39.2623665s)
--- PASS: TestForceSystemdEnv (412.03s)

TestErrorSpam/start (17.04s)

=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 start --dry-run: (5.6820386s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 start --dry-run
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 start --dry-run: (5.6840217s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 start --dry-run
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 start --dry-run: (5.6674104s)
--- PASS: TestErrorSpam/start (17.04s)

TestErrorSpam/status (36.79s)

=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 status: (12.5542903s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 status
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 status: (11.9520193s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 status
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 status: (12.2753751s)
--- PASS: TestErrorSpam/status (36.79s)

TestErrorSpam/pause (23.92s)

=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 pause: (8.4832769s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 pause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 pause: (7.8272537s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 pause
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 pause: (7.609033s)
--- PASS: TestErrorSpam/pause (23.92s)

TestErrorSpam/unpause (23.78s)

=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 unpause
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 unpause: (7.9521753s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 unpause
E0407 12:36:55.734327    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 12:36:55.741485    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 12:36:55.753489    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 12:36:55.776106    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 12:36:55.817802    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 12:36:55.899530    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 12:36:56.061631    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 12:36:56.383840    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 12:36:57.025893    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 12:36:58.308688    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 12:37:00.870570    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 unpause: (8.0370812s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 unpause
E0407 12:37:05.994394    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 unpause: (7.7915161s)
--- PASS: TestErrorSpam/unpause (23.78s)

                                                
                                    
TestErrorSpam/stop (61.8s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 stop
E0407 12:37:16.236641    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 12:37:36.719903    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 stop: (39.2682724s)
error_spam_test.go:159: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 stop
error_spam_test.go:159: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 stop: (11.6805726s)
error_spam_test.go:182: (dbg) Run:  out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 stop
error_spam_test.go:182: (dbg) Done: out/minikube-windows-amd64.exe -p nospam-276800 --log_dir C:\Users\jenkins.minikube3\AppData\Local\Temp\nospam-276800 stop: (10.8460833s)
--- PASS: TestErrorSpam/stop (61.80s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1872: local sync path: C:\Users\jenkins.minikube3\minikube-integration\.minikube\files\etc\test\nested\copy\7728\hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.04s)

                                                
                                    
TestFunctional/serial/StartWithProxy (231.08s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2251: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-168700 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv
E0407 12:39:39.605636    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 12:41:55.734237    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:2251: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-168700 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperv: (3m51.0764965s)
--- PASS: TestFunctional/serial/StartWithProxy (231.08s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (124.81s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
I0407 12:42:19.582878    7728 config.go:182] Loaded profile config "functional-168700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
functional_test.go:676: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-168700 --alsologtostderr -v=8
E0407 12:42:23.449202    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:676: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-168700 --alsologtostderr -v=8: (2m4.8078922s)
functional_test.go:680: soft start took 2m4.8096477s for "functional-168700" cluster.
I0407 12:44:24.392017    7728 config.go:182] Loaded profile config "functional-168700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/SoftStart (124.81s)

                                                
                                    
TestFunctional/serial/KubeContext (0.13s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:698: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.13s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.23s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:713: (dbg) Run:  kubectl --context functional-168700 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.23s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (26.26s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1066: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 cache add registry.k8s.io/pause:3.1
functional_test.go:1066: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 cache add registry.k8s.io/pause:3.1: (8.8679027s)
functional_test.go:1066: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 cache add registry.k8s.io/pause:3.3
functional_test.go:1066: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 cache add registry.k8s.io/pause:3.3: (8.7720565s)
functional_test.go:1066: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 cache add registry.k8s.io/pause:latest
functional_test.go:1066: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 cache add registry.k8s.io/pause:latest: (8.6196322s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (26.26s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (10.64s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1094: (dbg) Run:  docker build -t minikube-local-cache-test:functional-168700 C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1314788264\001
functional_test.go:1094: (dbg) Done: docker build -t minikube-local-cache-test:functional-168700 C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1314788264\001: (1.800719s)
functional_test.go:1106: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 cache add minikube-local-cache-test:functional-168700
functional_test.go:1106: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 cache add minikube-local-cache-test:functional-168700: (8.4241121s)
functional_test.go:1111: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 cache delete minikube-local-cache-test:functional-168700
functional_test.go:1100: (dbg) Run:  docker rmi minikube-local-cache-test:functional-168700
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (10.64s)
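
For reference, the local-image caching flow exercised above can be replayed by hand. This is a sketch based on the commands logged in this test; the image tag minikube-local-cache-test:functional-168700 and the temp build directory are specific to this run, and the # lines are comments for a PowerShell session.

    # build a throwaway image on the host docker daemon
    docker build -t minikube-local-cache-test:functional-168700 C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalserialCacheCmdcacheadd_local1314788264\001
    # push it into the profile's cache, then clean up on both sides
    out/minikube-windows-amd64.exe -p functional-168700 cache add minikube-local-cache-test:functional-168700
    out/minikube-windows-amd64.exe -p functional-168700 cache delete minikube-local-cache-test:functional-168700
    docker rmi minikube-local-cache-test:functional-168700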

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.29s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1119: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.29s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.27s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1127: (dbg) Run:  out/minikube-windows-amd64.exe cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.27s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.44s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1141: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 ssh sudo crictl images
functional_test.go:1141: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 ssh sudo crictl images: (9.439998s)
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (9.44s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (36.8s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1164: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1164: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 ssh sudo docker rmi registry.k8s.io/pause:latest: (9.603802s)
functional_test.go:1170: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1170: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-168700 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (9.4915303s)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1175: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 cache reload
functional_test.go:1175: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 cache reload: (8.2758933s)
functional_test.go:1180: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1180: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 ssh sudo crictl inspecti registry.k8s.io/pause:latest: (9.4306938s)
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (36.80s)
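
Based on the commands logged above, the cache-reload round trip can be reproduced manually against a running profile (functional-168700 here; substitute your own):

    # remove the cached image inside the VM
    out/minikube-windows-amd64.exe -p functional-168700 ssh sudo docker rmi registry.k8s.io/pause:latest
    # this lookup is expected to fail (exit status 1) because the image is gone
    out/minikube-windows-amd64.exe -p functional-168700 ssh sudo crictl inspecti registry.k8s.io/pause:latest
    # restore everything in minikube's cache, then verify the image is back
    out/minikube-windows-amd64.exe -p functional-168700 cache reload
    out/minikube-windows-amd64.exe -p functional-168700 ssh sudo crictl inspecti registry.k8s.io/pause:latest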

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.54s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1189: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:3.1
functional_test.go:1189: (dbg) Run:  out/minikube-windows-amd64.exe cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.54s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.52s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:733: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 kubectl -- --context functional-168700 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.52s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (1.89s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:758: (dbg) Run:  out\kubectl.exe --context functional-168700 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.89s)

                                                
                                    
TestFunctional/serial/ExtraConfig (126.67s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:774: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-168700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0407 12:46:55.735597    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
functional_test.go:774: (dbg) Done: out/minikube-windows-amd64.exe start -p functional-168700 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (2m6.6719859s)
functional_test.go:778: restart took 2m6.6725529s for "functional-168700" cluster.
I0407 12:47:58.081239    7728 config.go:182] Loaded profile config "functional-168700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
--- PASS: TestFunctional/serial/ExtraConfig (126.67s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.18s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:827: (dbg) Run:  kubectl --context functional-168700 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:842: etcd phase: Running
functional_test.go:852: etcd status: Ready
functional_test.go:842: kube-apiserver phase: Running
functional_test.go:852: kube-apiserver status: Ready
functional_test.go:842: kube-controller-manager phase: Running
functional_test.go:852: kube-controller-manager status: Ready
functional_test.go:842: kube-scheduler phase: Running
functional_test.go:852: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.18s)

                                                
                                    
TestFunctional/serial/LogsCmd (8.38s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1253: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 logs
functional_test.go:1253: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 logs: (8.3802746s)
--- PASS: TestFunctional/serial/LogsCmd (8.38s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (10.59s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1267: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 logs --file C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3693614565\001\logs.txt
functional_test.go:1267: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 logs --file C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalserialLogsFileCmd3693614565\001\logs.txt: (10.5835576s)
--- PASS: TestFunctional/serial/LogsFileCmd (10.59s)

                                                
                                    
TestFunctional/serial/InvalidService (21.02s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2338: (dbg) Run:  kubectl --context functional-168700 apply -f testdata\invalidsvc.yaml
functional_test.go:2352: (dbg) Run:  out/minikube-windows-amd64.exe service invalid-svc -p functional-168700
functional_test.go:2352: (dbg) Non-zero exit: out/minikube-windows-amd64.exe service invalid-svc -p functional-168700: exit status 115 (16.7090827s)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL             |
	|-----------|-------------|-------------|----------------------------|
	| default   | invalid-svc |          80 | http://172.17.82.137:32164 |
	|-----------|-------------|-------------|----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube_service_5a553248039ac2ab6beea740c8d8ce1b809666c7_1.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2344: (dbg) Run:  kubectl --context functional-168700 delete -f testdata\invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (21.02s)
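
The exit status 115 above is the expected outcome: the service object exists, but no running pod backs it, so minikube refuses to print a URL. A rough sketch of the same check, taken from the commands logged in this test (the manifest lives in the test's testdata directory; your manifest will differ):

    kubectl --context functional-168700 apply -f testdata\invalidsvc.yaml
    # exits 115 with SVC_UNREACHABLE because no pod backs invalid-svc
    out/minikube-windows-amd64.exe service invalid-svc -p functional-168700
    kubectl --context functional-168700 delete -f testdata\invalidsvc.yaml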

                                                
                                    
TestFunctional/parallel/ConfigCmd (1.89s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1216: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-168700 config get cpus: exit status 14 (290.2962ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1216: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 config set cpus 2
functional_test.go:1216: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 config get cpus
functional_test.go:1216: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 config unset cpus
functional_test.go:1216: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 config get cpus
functional_test.go:1216: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-168700 config get cpus: exit status 14 (262.5673ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (1.89s)
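
The exit status 14 results above are expected: "config get" returns 14 when the requested key is not present. A minimal sketch of the same set/get/unset cycle, using the cpus key from this test (# lines are comments for a PowerShell session):

    out/minikube-windows-amd64.exe -p functional-168700 config unset cpus
    # exits 14: key not in config
    out/minikube-windows-amd64.exe -p functional-168700 config get cpus
    out/minikube-windows-amd64.exe -p functional-168700 config set cpus 2
    # now succeeds and prints the stored value
    out/minikube-windows-amd64.exe -p functional-168700 config get cpus
    out/minikube-windows-amd64.exe -p functional-168700 config unset cpus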

                                                
                                    
TestFunctional/parallel/StatusCmd (43.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:871: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 status
functional_test.go:871: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 status: (13.4217247s)
functional_test.go:877: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:877: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}: (15.1688256s)
functional_test.go:889: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 status -o json
functional_test.go:889: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 status -o json: (14.7109807s)
--- PASS: TestFunctional/parallel/StatusCmd (43.30s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (27.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1646: (dbg) Run:  kubectl --context functional-168700 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1652: (dbg) Run:  kubectl --context functional-168700 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-58f9cf68d8-g5ccm" [a0af6b30-8632-4124-80e4-7a98f5c79a25] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-58f9cf68d8-g5ccm" [a0af6b30-8632-4124-80e4-7a98f5c79a25] Running
functional_test.go:1657: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.0053275s
functional_test.go:1666: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 service hello-node-connect --url
functional_test.go:1666: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 service hello-node-connect --url: (18.9231338s)
functional_test.go:1672: found endpoint for hello-node-connect: http://172.17.82.137:30907
functional_test.go:1692: http://172.17.82.137:30907: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-58f9cf68d8-g5ccm

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://172.17.82.137:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=172.17.82.137:30907
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (27.34s)
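
For context, the connectivity check above boils down to three commands; this is a sketch using the image and names from this run:

    # create and expose a NodePort service backed by the echoserver image
    kubectl --context functional-168700 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
    kubectl --context functional-168700 expose deployment hello-node-connect --type=NodePort --port=8080
    # once the pod is Running, print the reachable URL (http://<node-ip>:<node-port>)
    out/minikube-windows-amd64.exe -p functional-168700 service hello-node-connect --url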

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1707: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 addons list
functional_test.go:1719: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.62s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (44.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [8f7db86b-d108-41fa-be6d-d842c75b5b81] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.0065591s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-168700 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-168700 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-168700 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-168700 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [ad24d21b-0173-44a7-89c8-af6ca4930693] Pending
helpers_test.go:344: "sp-pod" [ad24d21b-0173-44a7-89c8-af6ca4930693] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [ad24d21b-0173-44a7-89c8-af6ca4930693] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 27.0089032s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-168700 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-168700 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-168700 delete -f testdata/storage-provisioner/pod.yaml: (1.7142686s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-168700 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [a508150a-6cd6-4810-9fbf-4c7786e63bba] Pending
helpers_test.go:344: "sp-pod" [a508150a-6cd6-4810-9fbf-4c7786e63bba] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [a508150a-6cd6-4810-9fbf-4c7786e63bba] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.0059035s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-168700 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.06s)
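
The sequence above verifies that data written to a PVC survives pod deletion. A condensed sketch of the same flow, based on the kubectl commands logged in this test (the manifests are the test's storage-provisioner testdata; the pod mounts the claim at /tmp/mount):

    # claim storage, then start a pod that mounts the claim
    kubectl --context functional-168700 apply -f testdata/storage-provisioner/pvc.yaml
    kubectl --context functional-168700 apply -f testdata/storage-provisioner/pod.yaml
    # write a marker file, recreate the pod, and confirm the file is still there
    kubectl --context functional-168700 exec sp-pod -- touch /tmp/mount/foo
    kubectl --context functional-168700 delete -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-168700 apply -f testdata/storage-provisioner/pod.yaml
    kubectl --context functional-168700 exec sp-pod -- ls /tmp/mount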

                                                
                                    
TestFunctional/parallel/SSHCmd (22.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1742: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 ssh "echo hello"
functional_test.go:1742: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 ssh "echo hello": (11.0903858s)
functional_test.go:1759: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 ssh "cat /etc/hostname"
functional_test.go:1759: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 ssh "cat /etc/hostname": (11.4236681s)
--- PASS: TestFunctional/parallel/SSHCmd (22.51s)

                                                
                                    
TestFunctional/parallel/CpCmd (61.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 cp testdata\cp-test.txt /home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 cp testdata\cp-test.txt /home/docker/cp-test.txt: (9.2217656s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 ssh -n functional-168700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 ssh -n functional-168700 "sudo cat /home/docker/cp-test.txt": (11.2695642s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 cp functional-168700:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalparallelCpCmd398901665\001\cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 cp functional-168700:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestFunctionalparallelCpCmd398901665\001\cp-test.txt: (10.3551792s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 ssh -n functional-168700 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 ssh -n functional-168700 "sudo cat /home/docker/cp-test.txt": (11.3390132s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 cp testdata\cp-test.txt /tmp/does/not/exist/cp-test.txt: (8.5671169s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 ssh -n functional-168700 "sudo cat /tmp/does/not/exist/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 ssh -n functional-168700 "sudo cat /tmp/does/not/exist/cp-test.txt": (11.0358975s)
--- PASS: TestFunctional/parallel/CpCmd (61.80s)
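
The copy checks above amount to pushing a file into the VM, reading it back over SSH, and pulling it out again; a sketch with this run's guest paths (the local destination below is a placeholder, since the test used a per-run temp directory):

    # host -> VM, then verify inside the guest
    out/minikube-windows-amd64.exe -p functional-168700 cp testdata\cp-test.txt /home/docker/cp-test.txt
    out/minikube-windows-amd64.exe -p functional-168700 ssh -n functional-168700 "sudo cat /home/docker/cp-test.txt"
    # VM -> host (choose any writable local path)
    out/minikube-windows-amd64.exe -p functional-168700 cp functional-168700:/home/docker/cp-test.txt C:\path\of\your\choice\cp-test.txt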

                                                
                                    
TestFunctional/parallel/MySQL (58.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1810: (dbg) Run:  kubectl --context functional-168700 replace --force -f testdata\mysql.yaml
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-58ccfd96bb-nf6g8" [ade239f2-7aa0-43a2-88bd-dab179490e46] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-58ccfd96bb-nf6g8" [ade239f2-7aa0-43a2-88bd-dab179490e46] Running
functional_test.go:1816: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 45.0046161s
functional_test.go:1824: (dbg) Run:  kubectl --context functional-168700 exec mysql-58ccfd96bb-nf6g8 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-168700 exec mysql-58ccfd96bb-nf6g8 -- mysql -ppassword -e "show databases;": exit status 1 (277.5332ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0407 12:51:43.099825    7728 retry.go:31] will retry after 623.448896ms: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-168700 exec mysql-58ccfd96bb-nf6g8 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-168700 exec mysql-58ccfd96bb-nf6g8 -- mysql -ppassword -e "show databases;": exit status 1 (247.2271ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0407 12:51:43.980002    7728 retry.go:31] will retry after 761.688296ms: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-168700 exec mysql-58ccfd96bb-nf6g8 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-168700 exec mysql-58ccfd96bb-nf6g8 -- mysql -ppassword -e "show databases;": exit status 1 (271.9062ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0407 12:51:45.025483    7728 retry.go:31] will retry after 2.120840498s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-168700 exec mysql-58ccfd96bb-nf6g8 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-168700 exec mysql-58ccfd96bb-nf6g8 -- mysql -ppassword -e "show databases;": exit status 1 (309.0298ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0407 12:51:47.466096    7728 retry.go:31] will retry after 2.402821477s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-168700 exec mysql-58ccfd96bb-nf6g8 -- mysql -ppassword -e "show databases;"
functional_test.go:1824: (dbg) Non-zero exit: kubectl --context functional-168700 exec mysql-58ccfd96bb-nf6g8 -- mysql -ppassword -e "show databases;": exit status 1 (389.4925ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
I0407 12:51:50.270638    7728 retry.go:31] will retry after 4.916963893s: exit status 1
functional_test.go:1824: (dbg) Run:  kubectl --context functional-168700 exec mysql-58ccfd96bb-nf6g8 -- mysql -ppassword -e "show databases;"
E0407 12:51:55.736276    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestFunctional/parallel/MySQL (58.16s)
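
The retries above are normal: for a short window after the container reports Running, mysqld first refuses connections (ERROR 2002) and then rejects credentials (ERROR 1045), so the test simply polls until the query succeeds. The probe itself is a single command (pod name is from this run):

    kubectl --context functional-168700 exec mysql-58ccfd96bb-nf6g8 -- mysql -ppassword -e "show databases;"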

                                                
                                    
TestFunctional/parallel/FileSync (10.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1946: Checking for existence of /etc/test/nested/copy/7728/hosts within VM
functional_test.go:1948: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 ssh "sudo cat /etc/test/nested/copy/7728/hosts"
functional_test.go:1948: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 ssh "sudo cat /etc/test/nested/copy/7728/hosts": (10.5350259s)
functional_test.go:1953: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (10.53s)

                                                
                                    
TestFunctional/parallel/CertSync (67.09s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1989: Checking for existence of /etc/ssl/certs/7728.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 ssh "sudo cat /etc/ssl/certs/7728.pem"
functional_test.go:1990: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 ssh "sudo cat /etc/ssl/certs/7728.pem": (11.78818s)
functional_test.go:1989: Checking for existence of /usr/share/ca-certificates/7728.pem within VM
functional_test.go:1990: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 ssh "sudo cat /usr/share/ca-certificates/7728.pem"
functional_test.go:1990: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 ssh "sudo cat /usr/share/ca-certificates/7728.pem": (11.2176886s)
functional_test.go:1989: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1990: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1990: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 ssh "sudo cat /etc/ssl/certs/51391683.0": (10.2700175s)
functional_test.go:2016: Checking for existence of /etc/ssl/certs/77282.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 ssh "sudo cat /etc/ssl/certs/77282.pem"
functional_test.go:2017: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 ssh "sudo cat /etc/ssl/certs/77282.pem": (11.0635478s)
functional_test.go:2016: Checking for existence of /usr/share/ca-certificates/77282.pem within VM
functional_test.go:2017: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 ssh "sudo cat /usr/share/ca-certificates/77282.pem"
functional_test.go:2017: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 ssh "sudo cat /usr/share/ca-certificates/77282.pem": (11.0954058s)
functional_test.go:2016: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:2017: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
functional_test.go:2017: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0": (11.6511051s)
--- PASS: TestFunctional/parallel/CertSync (67.09s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:236: (dbg) Run:  kubectl --context functional-168700 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.22s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (12.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2044: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 ssh "sudo systemctl is-active crio"
functional_test.go:2044: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-168700 ssh "sudo systemctl is-active crio": exit status 1 (12.007115s)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (12.01s)
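
The non-zero exit here is the point of the test: with Docker as the active container runtime, crio must be inactive, and systemctl is-active reports that by printing "inactive" and exiting with status 3 (surfaced above as "ssh: Process exited with status 3"). Sketch of the same probe:

    # prints "inactive" and exits non-zero when crio is not running
    out/minikube-windows-amd64.exe -p functional-168700 ssh "sudo systemctl is-active crio"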

                                                
                                    
TestFunctional/parallel/License (1.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2305: (dbg) Run:  out/minikube-windows-amd64.exe license
functional_test.go:2305: (dbg) Done: out/minikube-windows-amd64.exe license: (1.7663507s)
--- PASS: TestFunctional/parallel/License (1.79s)

                                                
                                    
TestFunctional/parallel/DockerEnv/powershell (47.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/powershell
functional_test.go:516: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-168700 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-168700"
functional_test.go:516: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-168700 docker-env | Invoke-Expression ; out/minikube-windows-amd64.exe status -p functional-168700": (31.0086515s)
functional_test.go:539: (dbg) Run:  powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-168700 docker-env | Invoke-Expression ; docker images"
functional_test.go:539: (dbg) Done: powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-168700 docker-env | Invoke-Expression ; docker images": (16.1085213s)
--- PASS: TestFunctional/parallel/DockerEnv/powershell (47.13s)
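
The docker-env check pipes minikube's environment exports into the current PowerShell session so that the host docker client talks to the daemon inside the VM. Reproduced from the logged invocation (run from the directory that contains out/minikube-windows-amd64.exe):

    powershell.exe -NoProfile -NonInteractive "out/minikube-windows-amd64.exe -p functional-168700 docker-env | Invoke-Expression ; docker images"

After Invoke-Expression applies the exported variables, docker images lists the images inside the minikube VM rather than those of the host daemon.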

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (2.98s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2136: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 update-context --alsologtostderr -v=2
functional_test.go:2136: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 update-context --alsologtostderr -v=2: (2.9805581s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (2.98s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2136: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 update-context --alsologtostderr -v=2
functional_test.go:2136: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 update-context --alsologtostderr -v=2: (2.6445471s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (2.65s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (2.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2136: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 update-context --alsologtostderr -v=2
functional_test.go:2136: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 update-context --alsologtostderr -v=2: (2.5692814s)
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (2.57s)

                                                
                                    
TestFunctional/parallel/Version/short (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2273: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 version --short
--- PASS: TestFunctional/parallel/Version/short (0.25s)

                                                
                                    
TestFunctional/parallel/Version/components (8.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2287: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 version -o=json --components
functional_test.go:2287: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 version -o=json --components: (8.2259331s)
--- PASS: TestFunctional/parallel/Version/components (8.23s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (17.51s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1456: (dbg) Run:  kubectl --context functional-168700 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1462: (dbg) Run:  kubectl --context functional-168700 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-fcfd88b6f-4272g" [1271422d-a131-4e6b-aa35-1fc1d510c819] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-fcfd88b6f-4272g" [1271422d-a131-4e6b-aa35-1fc1d510c819] Running
functional_test.go:1467: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 17.010931s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (17.51s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (14.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1476: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 service list
functional_test.go:1476: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 service list: (14.2702383s)
--- PASS: TestFunctional/parallel/ServiceCmd/List (14.27s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (14.24s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1506: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 service list -o json
functional_test.go:1506: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 service list -o json: (14.240158s)
functional_test.go:1511: Took "14.2405358s" to run "out/minikube-windows-amd64.exe -p functional-168700 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (14.24s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-168700 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-168700 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-168700 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 8968: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 4688: TerminateProcess: Access is denied.
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-168700 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (9.46s)
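The "unable to kill pid …: OpenProcess: The parameter is incorrect." and "TerminateProcess: Access is denied." lines come from the cleanup helper trying to stop tunnel processes that have already exited or belong to another session; on Windows, os.FindProcess actually opens a process handle, so a stale PID surfaces as an OpenProcess error rather than a "no such process" result. A rough sketch of that kind of best-effort cleanup (purely illustrative; the real helpers_test.go logic also walks parent/child PIDs):

package main

import (
    "fmt"
    "os"
)

// bestEffortKill tries to terminate a PID and reports, but does not fail on,
// the errors seen in the log (already-exited PID, insufficient rights).
func bestEffortKill(pid int) {
    proc, err := os.FindProcess(pid)
    if err != nil {
        // On Windows this is where "OpenProcess: The parameter is incorrect."
        // shows up for a PID that no longer exists.
        fmt.Printf("unable to kill pid %d: %v (assuming dead)\n", pid, err)
        return
    }
    if err := proc.Kill(); err != nil {
        // "TerminateProcess: Access is denied." lands here.
        fmt.Printf("unable to kill pid %d: %v\n", pid, err)
        return
    }
    fmt.Printf("killed pid %d\n", pid)
}

func main() {
    for _, pid := range []int{8968, 4688} { // PIDs taken from the log above
        bestEffortKill(pid)
    }
}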

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-windows-amd64.exe -p functional-168700 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-168700 apply -f testdata\testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [3a70ca1f-487b-48ae-a1a2-920a12a58f2b] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [3a70ca1f-487b-48ae-a1a2-920a12a58f2b] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 13.004768s
I0407 12:50:03.253770    7728 kapi.go:150] Service nginx-svc in namespace default found.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (13.73s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-windows-amd64.exe -p functional-168700 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 13828: OpenProcess: The parameter is incorrect.
helpers_test.go:508: unable to kill pid 5340: TerminateProcess: Access is denied.
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.21s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (14.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1287: (dbg) Run:  out/minikube-windows-amd64.exe profile lis
functional_test.go:1292: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
functional_test.go:1292: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (13.7526374s)
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (14.10s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (15.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1327: (dbg) Run:  out/minikube-windows-amd64.exe profile list
functional_test.go:1327: (dbg) Done: out/minikube-windows-amd64.exe profile list: (14.7463037s)
functional_test.go:1332: Took "14.7466405s" to run "out/minikube-windows-amd64.exe profile list"
functional_test.go:1341: (dbg) Run:  out/minikube-windows-amd64.exe profile list -l
functional_test.go:1346: Took "303.719ms" to run "out/minikube-windows-amd64.exe profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (15.05s)
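The 'Took "…"' lines are simple wall-clock measurements around each CLI invocation; the contrast between the full `profile list` (~15s, it queries each Hyper-V VM through PowerShell) and the `-l` "light" variant (~300ms, local config only) is the point of the check. A minimal sketch of how such a timing comparison can be written (the binary path and flags are the ones from the log; the one-minute ceiling is an arbitrary illustrative bound, not the test's real threshold):

package main

import (
    "fmt"
    "os/exec"
    "time"
)

// timed runs a command and returns how long it took.
func timed(name string, args ...string) (time.Duration, error) {
    start := time.Now()
    err := exec.Command(name, args...).Run()
    return time.Since(start), err
}

func main() {
    bin := `out/minikube-windows-amd64.exe`

    full, err := timed(bin, "profile", "list")
    if err != nil {
        panic(err)
    }
    light, err := timed(bin, "profile", "list", "-l")
    if err != nil {
        panic(err)
    }

    fmt.Printf("profile list took %s, profile list -l took %s\n", full, light)
    if full > time.Minute {
        panic("profile list is unexpectedly slow")
    }
}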

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (8.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:278: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 image ls --format short --alsologtostderr
functional_test.go:278: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 image ls --format short --alsologtostderr: (8.1462463s)
functional_test.go:283: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-168700 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.10
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.32.2
registry.k8s.io/kube-proxy:v1.32.2
registry.k8s.io/kube-controller-manager:v1.32.2
registry.k8s.io/kube-apiserver:v1.32.2
registry.k8s.io/etcd:3.5.16-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.3
gcr.io/k8s-minikube/storage-provisioner:v5
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-168700
docker.io/kicbase/echo-server:functional-168700
functional_test.go:286: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-168700 image ls --format short --alsologtostderr:
I0407 12:52:19.738657   14088 out.go:345] Setting OutFile to fd 1744 ...
I0407 12:52:19.853791   14088 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:52:19.853791   14088 out.go:358] Setting ErrFile to fd 1660...
I0407 12:52:19.854788   14088 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:52:19.872393   14088 config.go:182] Loaded profile config "functional-168700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:52:19.872393   14088 config.go:182] Loaded profile config "functional-168700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:52:19.874099   14088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-168700 ).state
I0407 12:52:22.408959   14088 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0407 12:52:22.408959   14088 main.go:141] libmachine: [stderr =====>] : 
I0407 12:52:22.421159   14088 ssh_runner.go:195] Run: systemctl --version
I0407 12:52:22.421159   14088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-168700 ).state
I0407 12:52:24.763236   14088 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0407 12:52:24.763450   14088 main.go:141] libmachine: [stderr =====>] : 
I0407 12:52:24.763450   14088 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-168700 ).networkadapters[0]).ipaddresses[0]
I0407 12:52:27.544366   14088 main.go:141] libmachine: [stdout =====>] : 172.17.82.137

                                                
                                                
I0407 12:52:27.544366   14088 main.go:141] libmachine: [stderr =====>] : 
I0407 12:52:27.544508   14088 sshutil.go:53] new ssh client: &{IP:172.17.82.137 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-168700\id_rsa Username:docker}
I0407 12:52:27.647290   14088 ssh_runner.go:235] Completed: systemctl --version: (5.2261127s)
I0407 12:52:27.661068   14088 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (8.15s)
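Most of the ~8-second wall time in each `image ls` call above is libmachine querying Hyper-V through PowerShell: one call for the VM state and one for the first IP address of the first network adapter, exactly as the [executing ==>] lines show. A stand-alone sketch of that lookup (the PowerShell expressions are copied from the log; the VM name is the profile name):

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

// psOutput runs a PowerShell expression the same way the log shows libmachine doing it.
func psOutput(expr string) (string, error) {
    out, err := exec.Command(
        `C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe`,
        "-NoProfile", "-NonInteractive", expr,
    ).Output()
    return strings.TrimSpace(string(out)), err
}

func main() {
    vm := "functional-168700"

    state, err := psOutput(fmt.Sprintf(`( Hyper-V\Get-VM %s ).state`, vm))
    if err != nil {
        panic(err)
    }
    ip, err := psOutput(fmt.Sprintf(`(( Hyper-V\Get-VM %s ).networkadapters[0]).ipaddresses[0]`, vm))
    if err != nil {
        panic(err)
    }

    fmt.Printf("VM %s is %s at %s\n", vm, state, ip) // e.g. Running at 172.17.82.137
}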

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (8s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:278: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 image ls --format table --alsologtostderr
functional_test.go:278: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 image ls --format table --alsologtostderr: (7.9951588s)
functional_test.go:283: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-168700 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-apiserver              | v1.32.2           | 85b7a174738ba | 97MB   |
| registry.k8s.io/kube-proxy                  | v1.32.2           | f1332858868e1 | 94MB   |
| docker.io/library/nginx                     | alpine            | 1ff4bb4faebcf | 47.9MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                     | latest            | 53a18edff8091 | 192MB  |
| docker.io/kicbase/echo-server               | functional-168700 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| docker.io/library/minikube-local-cache-test | functional-168700 | 512f8ea8bcd97 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.32.2           | d8e673e7c9983 | 69.6MB |
| registry.k8s.io/etcd                        | 3.5.16-0          | a9e7e6b294baf | 150MB  |
| registry.k8s.io/pause                       | 3.10              | 873ed75102791 | 736kB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/kube-controller-manager     | v1.32.2           | b6a454c5a800d | 89.7MB |
| registry.k8s.io/coredns/coredns             | v1.11.3           | c69fa2e9cbf5f | 61.8MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:286: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-168700 image ls --format table --alsologtostderr:
I0407 12:52:27.876529    4388 out.go:345] Setting OutFile to fd 1304 ...
I0407 12:52:27.982852    4388 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:52:27.982953    4388 out.go:358] Setting ErrFile to fd 1160...
I0407 12:52:27.982953    4388 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:52:28.002746    4388 config.go:182] Loaded profile config "functional-168700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:52:28.003293    4388 config.go:182] Loaded profile config "functional-168700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:52:28.006071    4388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-168700 ).state
I0407 12:52:30.353437    4388 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0407 12:52:30.353670    4388 main.go:141] libmachine: [stderr =====>] : 
I0407 12:52:30.367254    4388 ssh_runner.go:195] Run: systemctl --version
I0407 12:52:30.367846    4388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-168700 ).state
I0407 12:52:32.728609    4388 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0407 12:52:32.729168    4388 main.go:141] libmachine: [stderr =====>] : 
I0407 12:52:32.729232    4388 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-168700 ).networkadapters[0]).ipaddresses[0]
I0407 12:52:35.551849    4388 main.go:141] libmachine: [stdout =====>] : 172.17.82.137

                                                
                                                
I0407 12:52:35.551849    4388 main.go:141] libmachine: [stderr =====>] : 
I0407 12:52:35.552585    4388 sshutil.go:53] new ssh client: &{IP:172.17.82.137 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-168700\id_rsa Username:docker}
I0407 12:52:35.673804    4388 ssh_runner.go:235] Completed: systemctl --version: (5.3059513s)
I0407 12:52:35.682882    4388 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (8.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (7.96s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:278: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 image ls --format json --alsologtostderr
functional_test.go:278: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 image ls --format json --alsologtostderr: (7.9590641s)
functional_test.go:283: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-168700 image ls --format json --alsologtostderr:
[{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"47900000"},{"id":"53a18edff8091d5faff1e42b4d885bc5f0f897873b0b8f0ace236cd5930819b0","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"192000000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-168700"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe5
0ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"512f8ea8bcd97727f93bbecd953b0bc5404527c78a0152e057af2856f5ea1e1f","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-168700"],"size":"30"},{"id":"d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.32.2"],"size":"69600000"},{"id":"85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.32.2"],"size":"97000000"},{"id":"b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.32.2"],"size":"89700000"},{"id":"c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6","repoDigests":[],"repoTags":["regis
try.k8s.io/coredns/coredns:v1.11.3"],"size":"61800000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.32.2"],"size":"94000000"},{"id":"a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.16-0"],"size":"150000000"},{"id":"873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.10"],"size":"736000"}]
functional_test.go:286: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-168700 image ls --format json --alsologtostderr:
I0407 12:52:27.767873   12504 out.go:345] Setting OutFile to fd 984 ...
I0407 12:52:27.843867   12504 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:52:27.843867   12504 out.go:358] Setting ErrFile to fd 1832...
I0407 12:52:27.843867   12504 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:52:27.865895   12504 config.go:182] Loaded profile config "functional-168700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:52:27.866910   12504 config.go:182] Loaded profile config "functional-168700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:52:27.867585   12504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-168700 ).state
I0407 12:52:30.221810   12504 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0407 12:52:30.222807   12504 main.go:141] libmachine: [stderr =====>] : 
I0407 12:52:30.235712   12504 ssh_runner.go:195] Run: systemctl --version
I0407 12:52:30.235712   12504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-168700 ).state
I0407 12:52:32.573394   12504 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0407 12:52:32.573394   12504 main.go:141] libmachine: [stderr =====>] : 
I0407 12:52:32.573394   12504 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-168700 ).networkadapters[0]).ipaddresses[0]
I0407 12:52:35.391051   12504 main.go:141] libmachine: [stdout =====>] : 172.17.82.137

                                                
                                                
I0407 12:52:35.391571   12504 main.go:141] libmachine: [stderr =====>] : 
I0407 12:52:35.391788   12504 sshutil.go:53] new ssh client: &{IP:172.17.82.137 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-168700\id_rsa Username:docker}
I0407 12:52:35.517459   12504 ssh_runner.go:235] Completed: systemctl --version: (5.2817287s)
I0407 12:52:35.527822   12504 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (7.96s)
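The JSON output above is an array of objects with id, repoDigests, repoTags and size fields, which makes it the easiest of the four list formats to assert against programmatically. A small sketch that runs the same command and checks that an expected tag is present (the struct tags follow the field names visible in the output; nothing beyond those fields is assumed):

package main

import (
    "encoding/json"
    "fmt"
    "os/exec"
)

type listedImage struct {
    ID       string   `json:"id"`
    RepoTags []string `json:"repoTags"`
    Size     string   `json:"size"`
}

func main() {
    out, err := exec.Command(`out/minikube-windows-amd64.exe`, "-p", "functional-168700",
        "image", "ls", "--format", "json").Output()
    if err != nil {
        panic(err)
    }

    var images []listedImage
    if err := json.Unmarshal(out, &images); err != nil {
        panic(err)
    }

    want := "registry.k8s.io/pause:3.10"
    for _, img := range images {
        for _, tag := range img.RepoTags {
            if tag == want {
                fmt.Printf("found %s (id %s, %s bytes)\n", tag, img.ID[:12], img.Size)
                return
            }
        }
    }
    panic("expected image tag not found: " + want)
}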

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (8.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:278: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 image ls --format yaml --alsologtostderr
functional_test.go:278: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 image ls --format yaml --alsologtostderr: (8.0338827s)
functional_test.go:283: (dbg) Stdout: out/minikube-windows-amd64.exe -p functional-168700 image ls --format yaml --alsologtostderr:
- id: d8e673e7c9983f1f53569a9d2ba786c8abb42e3f744f77dc97a595f3caf9435d
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.32.2
size: "69600000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 512f8ea8bcd97727f93bbecd953b0bc5404527c78a0152e057af2856f5ea1e1f
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-168700
size: "30"
- id: 85b7a174738baecbc53029b7913cd430a2060e0cbdb5f56c7957d32ff7f241ef
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.32.2
size: "97000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: b6a454c5a800d201daacead6ff195ec6049fe6dc086621b0670bca912efaf389
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.32.2
size: "89700000"
- id: f1332858868e1c6a905123b21e2e322ab45a5b99a3532e68ff49a87c2266ebc5
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.32.2
size: "94000000"
- id: c69fa2e9cbf5f42dc48af631e956d3f95724c13f91596bc567591790e5e36db6
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.3
size: "61800000"
- id: 873ed75102791e5b0b8a7fcd41606c92fcec98d56d05ead4ac5131650004c136
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.10
size: "736000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 1ff4bb4faebcfb1f7e01144fa9904a570ab9bab88694457855feb6c6bba3fa07
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "47900000"
- id: 53a18edff8091d5faff1e42b4d885bc5f0f897873b0b8f0ace236cd5930819b0
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "192000000"
- id: a9e7e6b294baf1695fccb862d956c5d3ad8510e1e4ca1535f35dc09f247abbfc
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.16-0
size: "150000000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-168700
size: "4940000"

                                                
                                                
functional_test.go:286: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-168700 image ls --format yaml --alsologtostderr:
I0407 12:52:19.736825    5500 out.go:345] Setting OutFile to fd 1716 ...
I0407 12:52:19.829803    5500 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:52:19.829803    5500 out.go:358] Setting ErrFile to fd 1416...
I0407 12:52:19.829803    5500 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:52:19.843828    5500 config.go:182] Loaded profile config "functional-168700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:52:19.844794    5500 config.go:182] Loaded profile config "functional-168700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:52:19.845792    5500 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-168700 ).state
I0407 12:52:22.330883    5500 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0407 12:52:22.330883    5500 main.go:141] libmachine: [stderr =====>] : 
I0407 12:52:22.350066    5500 ssh_runner.go:195] Run: systemctl --version
I0407 12:52:22.350066    5500 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-168700 ).state
I0407 12:52:24.694511    5500 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0407 12:52:24.694511    5500 main.go:141] libmachine: [stderr =====>] : 
I0407 12:52:24.694511    5500 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-168700 ).networkadapters[0]).ipaddresses[0]
I0407 12:52:27.455969    5500 main.go:141] libmachine: [stdout =====>] : 172.17.82.137

                                                
                                                
I0407 12:52:27.456459    5500 main.go:141] libmachine: [stderr =====>] : 
I0407 12:52:27.456459    5500 sshutil.go:53] new ssh client: &{IP:172.17.82.137 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-168700\id_rsa Username:docker}
I0407 12:52:27.563748    5500 ssh_runner.go:235] Completed: systemctl --version: (5.2136628s)
I0407 12:52:27.574114    5500 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (8.03s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (28.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:325: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 ssh pgrep buildkitd
functional_test.go:325: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p functional-168700 ssh pgrep buildkitd: exit status 1 (10.2938669s)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:332: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 image build -t localhost/my-image:functional-168700 testdata\build --alsologtostderr
functional_test.go:332: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 image build -t localhost/my-image:functional-168700 testdata\build --alsologtostderr: (11.0200138s)
functional_test.go:340: (dbg) Stderr: out/minikube-windows-amd64.exe -p functional-168700 image build -t localhost/my-image:functional-168700 testdata\build --alsologtostderr:
I0407 12:52:30.011761   10188 out.go:345] Setting OutFile to fd 1220 ...
I0407 12:52:30.141528   10188 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:52:30.141621   10188 out.go:358] Setting ErrFile to fd 1052...
I0407 12:52:30.141621   10188 out.go:392] TERM=,COLORTERM=, which probably does not support color
I0407 12:52:30.162545   10188 config.go:182] Loaded profile config "functional-168700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:52:30.191058   10188 config.go:182] Loaded profile config "functional-168700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
I0407 12:52:30.192032   10188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-168700 ).state
I0407 12:52:32.538582   10188 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0407 12:52:32.539314   10188 main.go:141] libmachine: [stderr =====>] : 
I0407 12:52:32.551508   10188 ssh_runner.go:195] Run: systemctl --version
I0407 12:52:32.551508   10188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM functional-168700 ).state
I0407 12:52:34.942004   10188 main.go:141] libmachine: [stdout =====>] : Running

                                                
                                                
I0407 12:52:34.944066   10188 main.go:141] libmachine: [stderr =====>] : 
I0407 12:52:34.944117   10188 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM functional-168700 ).networkadapters[0]).ipaddresses[0]
I0407 12:52:37.528598   10188 main.go:141] libmachine: [stdout =====>] : 172.17.82.137

                                                
                                                
I0407 12:52:37.528598   10188 main.go:141] libmachine: [stderr =====>] : 
I0407 12:52:37.529921   10188 sshutil.go:53] new ssh client: &{IP:172.17.82.137 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\functional-168700\id_rsa Username:docker}
I0407 12:52:37.629261   10188 ssh_runner.go:235] Completed: systemctl --version: (5.0777344s)
I0407 12:52:37.629261   10188 build_images.go:161] Building image from path: C:\Users\jenkins.minikube3\AppData\Local\Temp\build.3530557348.tar
I0407 12:52:37.640824   10188 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0407 12:52:37.669007   10188 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3530557348.tar
I0407 12:52:37.675034   10188 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3530557348.tar: stat -c "%s %y" /var/lib/minikube/build/build.3530557348.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.3530557348.tar': No such file or directory
I0407 12:52:37.675154   10188 ssh_runner.go:362] scp C:\Users\jenkins.minikube3\AppData\Local\Temp\build.3530557348.tar --> /var/lib/minikube/build/build.3530557348.tar (3072 bytes)
I0407 12:52:37.730669   10188 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3530557348
I0407 12:52:37.759571   10188 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3530557348 -xf /var/lib/minikube/build/build.3530557348.tar
I0407 12:52:37.779485   10188 docker.go:360] Building image: /var/lib/minikube/build/build.3530557348
I0407 12:52:37.788493   10188 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-168700 /var/lib/minikube/build/build.3530557348
#0 building with "default" instance using docker driver

                                                
                                                
#1 [internal] load build definition from Dockerfile
#1 transferring dockerfile: 97B done
#1 DONE 0.1s

                                                
                                                
#2 [internal] load metadata for gcr.io/k8s-minikube/busybox:latest
#2 DONE 0.8s

                                                
                                                
#3 [internal] load .dockerignore
#3 transferring context: 2B done
#3 DONE 0.1s

                                                
                                                
#4 [internal] load build context
#4 transferring context: 62B done
#4 DONE 0.1s

                                                
                                                
#5 [1/3] FROM gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
#5 resolve gcr.io/k8s-minikube/busybox:latest@sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 0.1s done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0B / 772.79kB 0.2s
#5 sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b 770B / 770B done
#5 sha256:62ffc2ed7554e4c6d360bce40bbcf196573dd27c4ce080641a2c59867e732dee 527B / 527B done
#5 sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a 1.46kB / 1.46kB done
#5 sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 772.79kB / 772.79kB 0.3s done
#5 extracting sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa 0.2s done
#5 DONE 0.8s

                                                
                                                
#6 [2/3] RUN true
#6 DONE 0.3s

                                                
                                                
#7 [3/3] ADD content.txt /
#7 DONE 0.2s

                                                
                                                
#8 exporting to image
#8 exporting layers 0.1s done
#8 writing image sha256:f65dca6327fdf367c3e775a7e3c6eafa97a105bf6fb2f173eafd91b50a00cab5
#8 writing image sha256:f65dca6327fdf367c3e775a7e3c6eafa97a105bf6fb2f173eafd91b50a00cab5 done
#8 naming to localhost/my-image:functional-168700 0.0s done
#8 DONE 0.2s
I0407 12:52:40.829164   10188 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-168700 /var/lib/minikube/build/build.3530557348: (3.0405714s)
I0407 12:52:40.840659   10188 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3530557348
I0407 12:52:40.871896   10188 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3530557348.tar
I0407 12:52:40.890720   10188 build_images.go:217] Built localhost/my-image:functional-168700 from C:\Users\jenkins.minikube3\AppData\Local\Temp\build.3530557348.tar
I0407 12:52:40.890928   10188 build_images.go:133] succeeded building to: functional-168700
I0407 12:52:40.890928   10188 build_images.go:134] failed building to: 
functional_test.go:468: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 image ls
functional_test.go:468: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 image ls: (7.2476236s)
E0407 12:53:18.813585    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (28.56s)
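The ImageBuild flow above first checks whether buildkitd is running in the guest (the pgrep exits 1 because the Docker runtime is in use, so minikube falls back to `docker build`), then tars the local testdata\build context, copies it to /var/lib/minikube/build/ over SSH and builds it inside the VM. From the caller's side the whole round trip is just two CLI invocations; a sketch of that outer loop (paths and tag are the ones from the log):

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    const (
        bin     = `out/minikube-windows-amd64.exe`
        profile = "functional-168700"
        tag     = "localhost/my-image:functional-168700"
    )

    // Build the image inside the VM from the local context directory.
    build := exec.Command(bin, "-p", profile, "image", "build", "-t", tag,
        `testdata\build`, "--alsologtostderr")
    if out, err := build.CombinedOutput(); err != nil {
        panic(fmt.Sprintf("image build failed: %v\n%s", err, out))
    }

    // Confirm the freshly built tag shows up in the image list.
    ls, err := exec.Command(bin, "-p", profile, "image", "ls").Output()
    if err != nil {
        panic(err)
    }
    if !strings.Contains(string(ls), tag) {
        panic("built image not listed: " + tag)
    }
    fmt.Println("built and listed", tag)
}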

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:359: (dbg) Run:  docker pull kicbase/echo-server:1.0
functional_test.go:359: (dbg) Done: docker pull kicbase/echo-server:1.0: (2.2207914s)
functional_test.go:364: (dbg) Run:  docker tag kicbase/echo-server:1.0 kicbase/echo-server:functional-168700
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.38s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (19.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:372: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 image load --daemon kicbase/echo-server:functional-168700 --alsologtostderr
functional_test.go:372: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 image load --daemon kicbase/echo-server:functional-168700 --alsologtostderr: (10.4953015s)
functional_test.go:468: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 image ls
functional_test.go:468: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 image ls: (8.6767088s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (19.17s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (14.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1378: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json
functional_test.go:1378: (dbg) Done: out/minikube-windows-amd64.exe profile list -o json: (14.3236456s)
functional_test.go:1383: Took "14.3242728s" to run "out/minikube-windows-amd64.exe profile list -o json"
functional_test.go:1391: (dbg) Run:  out/minikube-windows-amd64.exe profile list -o json --light
functional_test.go:1396: Took "254.4764ms" to run "out/minikube-windows-amd64.exe profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (14.58s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (16.99s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:382: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 image load --daemon kicbase/echo-server:functional-168700 --alsologtostderr
functional_test.go:382: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 image load --daemon kicbase/echo-server:functional-168700 --alsologtostderr: (9.1601943s)
functional_test.go:468: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 image ls
functional_test.go:468: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 image ls: (7.8308044s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (16.99s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (16.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:252: (dbg) Run:  docker pull kicbase/echo-server:latest
functional_test.go:257: (dbg) Run:  docker tag kicbase/echo-server:latest kicbase/echo-server:functional-168700
functional_test.go:262: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 image load --daemon kicbase/echo-server:functional-168700 --alsologtostderr
functional_test.go:262: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 image load --daemon kicbase/echo-server:functional-168700 --alsologtostderr: (8.0026024s)
functional_test.go:468: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 image ls
functional_test.go:468: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 image ls: (7.5192572s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (16.53s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:397: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 image save kicbase/echo-server:functional-168700 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:397: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 image save kicbase/echo-server:functional-168700 C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: (7.6643664s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (7.66s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (14.79s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:409: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 image rm kicbase/echo-server:functional-168700 --alsologtostderr
functional_test.go:409: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 image rm kicbase/echo-server:functional-168700 --alsologtostderr: (7.4231919s)
functional_test.go:468: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 image ls
functional_test.go:468: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 image ls: (7.3646938s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (14.79s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (14.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:426: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr
functional_test.go:426: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 image load C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar --alsologtostderr: (7.6576445s)
functional_test.go:468: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 image ls
functional_test.go:468: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 image ls: (7.2143028s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (14.87s)
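ImageSaveToFile, ImageRemove and ImageLoadFromFile together exercise a save/remove/restore round trip: the echo-server image is exported to a tarball on the Windows host, deleted from the VM's container runtime, and then re-imported from the tarball. A compact sketch of that round trip using the same subcommands (the tar path is the workspace path from the log; treat it as an example location):

package main

import (
    "fmt"
    "os/exec"
)

// mk runs a minikube subcommand against the functional-168700 profile.
func mk(args ...string) error {
    cmd := exec.Command(`out/minikube-windows-amd64.exe`,
        append([]string{"-p", "functional-168700"}, args...)...)
    out, err := cmd.CombinedOutput()
    if err != nil {
        return fmt.Errorf("%v: %s", err, out)
    }
    return nil
}

func main() {
    image := "kicbase/echo-server:functional-168700"
    tar := `C:\jenkins\workspace\Hyper-V_Windows_integration\echo-server-save.tar`

    steps := [][]string{
        {"image", "save", image, tar}, // export to a tarball on the host
        {"image", "rm", image},        // remove it from the VM's container runtime
        {"image", "load", tar},        // re-import it from the tarball
    }
    for _, s := range steps {
        if err := mk(s...); err != nil {
            panic(err)
        }
    }
    fmt.Println("save/remove/load round trip completed for", image)
}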

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (7.75s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:436: (dbg) Run:  docker rmi kicbase/echo-server:functional-168700
functional_test.go:441: (dbg) Run:  out/minikube-windows-amd64.exe -p functional-168700 image save --daemon kicbase/echo-server:functional-168700 --alsologtostderr
functional_test.go:441: (dbg) Done: out/minikube-windows-amd64.exe -p functional-168700 image save --daemon kicbase/echo-server:functional-168700 --alsologtostderr: (7.5532697s)
functional_test.go:449: (dbg) Run:  docker image inspect kicbase/echo-server:functional-168700
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (7.75s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.22s)

                                                
                                                
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:1.0
functional_test.go:207: (dbg) Run:  docker rmi -f kicbase/echo-server:functional-168700
--- PASS: TestFunctional/delete_echo-server_images (0.22s)

                                                
                                    
TestFunctional/delete_my-image_image (0.08s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:215: (dbg) Run:  docker rmi -f localhost/my-image:functional-168700
--- PASS: TestFunctional/delete_my-image_image (0.08s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.09s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:223: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-168700
--- PASS: TestFunctional/delete_minikube_cached_images (0.09s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (706.64s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-windows-amd64.exe start -p ha-573100 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv
E0407 12:58:54.450783    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 12:58:54.458516    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 12:58:54.470309    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 12:58:54.492667    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 12:58:54.535511    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 12:58:54.618021    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 12:58:54.780322    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 12:58:55.103520    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 12:58:55.745806    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 12:58:57.028755    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 12:58:59.591605    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 12:59:04.714447    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 12:59:14.957387    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 12:59:35.439758    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 13:00:16.402649    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 13:01:38.324621    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 13:01:55.738230    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 13:03:54.452794    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 13:04:22.167996    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 13:06:55.741653    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:101: (dbg) Done: out/minikube-windows-amd64.exe start -p ha-573100 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperv: (11m9.6759792s)
ha_test.go:107: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 status -v=7 --alsologtostderr
ha_test.go:107: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 status -v=7 --alsologtostderr: (36.9621907s)
--- PASS: TestMultiControlPlane/serial/StartCluster (706.64s)
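StartCluster brings up a multi-node HA control plane with the exact flags shown (`--ha` adds the extra control-plane nodes, `--wait=true` blocks until core components are healthy), which is why the single `start` call accounts for over eleven of the ~twelve minutes. Driving it from Go is one long-running exec with a generous timeout; a sketch (the 20-minute bound is an illustrative choice, not the suite's real limit):

package main

import (
    "context"
    "fmt"
    "os"
    "os/exec"
    "time"
)

func main() {
    ctx, cancel := context.WithTimeout(context.Background(), 20*time.Minute)
    defer cancel()

    start := exec.CommandContext(ctx, `out/minikube-windows-amd64.exe`,
        "start", "-p", "ha-573100", "--wait=true", "--memory=2200",
        "--ha", "-v=7", "--alsologtostderr", "--driver=hyperv")
    start.Stdout = os.Stdout
    start.Stderr = os.Stderr
    if err := start.Run(); err != nil {
        panic(fmt.Sprintf("HA start failed: %v", err))
    }

    // Follow-up status call, as in ha_test.go:107; a non-zero exit generally
    // means at least one node or component is not reporting healthy.
    status := exec.Command(`out/minikube-windows-amd64.exe`,
        "-p", "ha-573100", "status", "-v=7", "--alsologtostderr")
    status.Stdout = os.Stdout
    status.Stderr = os.Stderr
    if err := status.Run(); err != nil {
        panic(fmt.Sprintf("status reported an unhealthy cluster: %v", err))
    }
}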

                                                
                                    
TestMultiControlPlane/serial/DeployApp (15.01s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-573100 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-573100 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-573100 -- rollout status deployment/busybox: (5.7270936s)
ha_test.go:140: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-573100 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-573100 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-573100 -- exec busybox-58667487b6-gtkbk -- nslookup kubernetes.io
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-573100 -- exec busybox-58667487b6-gtkbk -- nslookup kubernetes.io: (2.0581262s)
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-573100 -- exec busybox-58667487b6-szx9k -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-573100 -- exec busybox-58667487b6-tj2cw -- nslookup kubernetes.io
E0407 13:08:54.454284    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:171: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p ha-573100 -- exec busybox-58667487b6-tj2cw -- nslookup kubernetes.io: (1.8041763s)
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-573100 -- exec busybox-58667487b6-gtkbk -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-573100 -- exec busybox-58667487b6-szx9k -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-573100 -- exec busybox-58667487b6-tj2cw -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-573100 -- exec busybox-58667487b6-gtkbk -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-573100 -- exec busybox-58667487b6-szx9k -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p ha-573100 -- exec busybox-58667487b6-tj2cw -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (15.01s)
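The DeployApp step for the HA cluster rolls out a busybox Deployment from testdata/ha/ha-pod-dns-test.yaml and then runs nslookup from every replica against kubernetes.io, kubernetes.default and kubernetes.default.svc.cluster.local, i.e. it verifies that external and in-cluster DNS both resolve from pods spread across the control-plane nodes. A sketch of that verification loop, discovering the pod names with the same jsonpath query shown above (illustrative only, not the test's own code):

package main

import (
    "fmt"
    "os/exec"
    "strings"
)

func main() {
    profile := "ha-573100"

    // Discover pod names via the bundled kubectl, as in ha_test.go:163.
    out, err := exec.Command(`out/minikube-windows-amd64.exe`, "kubectl", "-p", profile, "--",
        "get", "pods", "-o", "jsonpath={.items[*].metadata.name}").Output()
    if err != nil {
        panic(err)
    }
    pods := strings.Fields(string(out))

    names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}
    for _, pod := range pods {
        if !strings.HasPrefix(pod, "busybox-") {
            continue
        }
        for _, name := range names {
            cmd := exec.Command(`out/minikube-windows-amd64.exe`, "kubectl", "-p", profile, "--",
                "exec", pod, "--", "nslookup", name)
            if lookup, err := cmd.CombinedOutput(); err != nil {
                panic(fmt.Sprintf("%s failed to resolve %s: %v\n%s", pod, name, err, lookup))
            }
        }
    }
    fmt.Println("DNS resolution verified from all busybox replicas")
}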

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (261.74s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-windows-amd64.exe node add -p ha-573100 -v=7 --alsologtostderr
E0407 13:11:55.741412    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:228: (dbg) Done: out/minikube-windows-amd64.exe node add -p ha-573100 -v=7 --alsologtostderr: (3m33.2648176s)
ha_test.go:234: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 status -v=7 --alsologtostderr
E0407 13:13:54.455853    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:234: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 status -v=7 --alsologtostderr: (48.4760802s)
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (261.74s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.19s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-573100 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.19s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (48.87s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
E0407 13:15:17.533844    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
ha_test.go:281: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (48.8671587s)
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (48.87s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (641.37s)

=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:328: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 status --output json -v=7 --alsologtostderr
ha_test.go:328: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 status --output json -v=7 --alsologtostderr: (48.3974305s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 cp testdata\cp-test.txt ha-573100:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 cp testdata\cp-test.txt ha-573100:/home/docker/cp-test.txt: (9.6838363s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100 "sudo cat /home/docker/cp-test.txt": (9.602526s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile384422757\001\cp-test_ha-573100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile384422757\001\cp-test_ha-573100.txt: (9.7282598s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100 "sudo cat /home/docker/cp-test.txt": (9.7132844s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100:/home/docker/cp-test.txt ha-573100-m02:/home/docker/cp-test_ha-573100_ha-573100-m02.txt
E0407 13:16:55.742913    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100:/home/docker/cp-test.txt ha-573100-m02:/home/docker/cp-test_ha-573100_ha-573100-m02.txt: (17.1653169s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100 "sudo cat /home/docker/cp-test.txt": (9.6418557s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m02 "sudo cat /home/docker/cp-test_ha-573100_ha-573100-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m02 "sudo cat /home/docker/cp-test_ha-573100_ha-573100-m02.txt": (9.7003041s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100:/home/docker/cp-test.txt ha-573100-m03:/home/docker/cp-test_ha-573100_ha-573100-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100:/home/docker/cp-test.txt ha-573100-m03:/home/docker/cp-test_ha-573100_ha-573100-m03.txt: (16.9339126s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100 "sudo cat /home/docker/cp-test.txt": (9.6210053s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m03 "sudo cat /home/docker/cp-test_ha-573100_ha-573100-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m03 "sudo cat /home/docker/cp-test_ha-573100_ha-573100-m03.txt": (9.6626099s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100:/home/docker/cp-test.txt ha-573100-m04:/home/docker/cp-test_ha-573100_ha-573100-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100:/home/docker/cp-test.txt ha-573100-m04:/home/docker/cp-test_ha-573100_ha-573100-m04.txt: (17.2014729s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100 "sudo cat /home/docker/cp-test.txt": (9.899669s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m04 "sudo cat /home/docker/cp-test_ha-573100_ha-573100-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m04 "sudo cat /home/docker/cp-test_ha-573100_ha-573100-m04.txt": (10.0399936s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 cp testdata\cp-test.txt ha-573100-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 cp testdata\cp-test.txt ha-573100-m02:/home/docker/cp-test.txt: (9.8546802s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m02 "sudo cat /home/docker/cp-test.txt"
E0407 13:18:54.457412    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m02 "sudo cat /home/docker/cp-test.txt": (9.9416919s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile384422757\001\cp-test_ha-573100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile384422757\001\cp-test_ha-573100-m02.txt: (9.771479s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m02 "sudo cat /home/docker/cp-test.txt": (9.683874s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100-m02:/home/docker/cp-test.txt ha-573100:/home/docker/cp-test_ha-573100-m02_ha-573100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100-m02:/home/docker/cp-test.txt ha-573100:/home/docker/cp-test_ha-573100-m02_ha-573100.txt: (16.9521981s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m02 "sudo cat /home/docker/cp-test.txt": (9.7837471s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100 "sudo cat /home/docker/cp-test_ha-573100-m02_ha-573100.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100 "sudo cat /home/docker/cp-test_ha-573100-m02_ha-573100.txt": (9.7711002s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100-m02:/home/docker/cp-test.txt ha-573100-m03:/home/docker/cp-test_ha-573100-m02_ha-573100-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100-m02:/home/docker/cp-test.txt ha-573100-m03:/home/docker/cp-test_ha-573100-m02_ha-573100-m03.txt: (16.9841077s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m02 "sudo cat /home/docker/cp-test.txt": (9.7305187s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m03 "sudo cat /home/docker/cp-test_ha-573100-m02_ha-573100-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m03 "sudo cat /home/docker/cp-test_ha-573100-m02_ha-573100-m03.txt": (9.7908738s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100-m02:/home/docker/cp-test.txt ha-573100-m04:/home/docker/cp-test_ha-573100-m02_ha-573100-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100-m02:/home/docker/cp-test.txt ha-573100-m04:/home/docker/cp-test_ha-573100-m02_ha-573100-m04.txt: (17.1137037s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m02 "sudo cat /home/docker/cp-test.txt": (9.8496676s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m04 "sudo cat /home/docker/cp-test_ha-573100-m02_ha-573100-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m04 "sudo cat /home/docker/cp-test_ha-573100-m02_ha-573100-m04.txt": (9.5997233s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 cp testdata\cp-test.txt ha-573100-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 cp testdata\cp-test.txt ha-573100-m03:/home/docker/cp-test.txt: (9.680722s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m03 "sudo cat /home/docker/cp-test.txt": (9.618602s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile384422757\001\cp-test_ha-573100-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile384422757\001\cp-test_ha-573100-m03.txt: (9.6489061s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m03 "sudo cat /home/docker/cp-test.txt": (9.7522469s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100-m03:/home/docker/cp-test.txt ha-573100:/home/docker/cp-test_ha-573100-m03_ha-573100.txt
E0407 13:21:55.744441    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100-m03:/home/docker/cp-test.txt ha-573100:/home/docker/cp-test_ha-573100-m03_ha-573100.txt: (16.8962668s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m03 "sudo cat /home/docker/cp-test.txt": (9.7229022s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100 "sudo cat /home/docker/cp-test_ha-573100-m03_ha-573100.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100 "sudo cat /home/docker/cp-test_ha-573100-m03_ha-573100.txt": (9.770404s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100-m03:/home/docker/cp-test.txt ha-573100-m02:/home/docker/cp-test_ha-573100-m03_ha-573100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100-m03:/home/docker/cp-test.txt ha-573100-m02:/home/docker/cp-test_ha-573100-m03_ha-573100-m02.txt: (16.9516022s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m03 "sudo cat /home/docker/cp-test.txt": (9.7689518s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m02 "sudo cat /home/docker/cp-test_ha-573100-m03_ha-573100-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m02 "sudo cat /home/docker/cp-test_ha-573100-m03_ha-573100-m02.txt": (9.6280201s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100-m03:/home/docker/cp-test.txt ha-573100-m04:/home/docker/cp-test_ha-573100-m03_ha-573100-m04.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100-m03:/home/docker/cp-test.txt ha-573100-m04:/home/docker/cp-test_ha-573100-m03_ha-573100-m04.txt: (16.9149846s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m03 "sudo cat /home/docker/cp-test.txt": (9.6775791s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m04 "sudo cat /home/docker/cp-test_ha-573100-m03_ha-573100-m04.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m04 "sudo cat /home/docker/cp-test_ha-573100-m03_ha-573100-m04.txt": (9.6385752s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 cp testdata\cp-test.txt ha-573100-m04:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 cp testdata\cp-test.txt ha-573100-m04:/home/docker/cp-test.txt: (9.7112876s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m04 "sudo cat /home/docker/cp-test.txt": (9.7457221s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile384422757\001\cp-test_ha-573100-m04.txt
E0407 13:23:54.457975    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100-m04:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiControlPlaneserialCopyFile384422757\001\cp-test_ha-573100-m04.txt: (9.6119173s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m04 "sudo cat /home/docker/cp-test.txt": (9.6335733s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100-m04:/home/docker/cp-test.txt ha-573100:/home/docker/cp-test_ha-573100-m04_ha-573100.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100-m04:/home/docker/cp-test.txt ha-573100:/home/docker/cp-test_ha-573100-m04_ha-573100.txt: (16.945332s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m04 "sudo cat /home/docker/cp-test.txt": (9.8390223s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100 "sudo cat /home/docker/cp-test_ha-573100-m04_ha-573100.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100 "sudo cat /home/docker/cp-test_ha-573100-m04_ha-573100.txt": (9.8287292s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100-m04:/home/docker/cp-test.txt ha-573100-m02:/home/docker/cp-test_ha-573100-m04_ha-573100-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100-m04:/home/docker/cp-test.txt ha-573100-m02:/home/docker/cp-test_ha-573100-m04_ha-573100-m02.txt: (16.9279653s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m04 "sudo cat /home/docker/cp-test.txt": (9.6299597s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m02 "sudo cat /home/docker/cp-test_ha-573100-m04_ha-573100-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m02 "sudo cat /home/docker/cp-test_ha-573100-m04_ha-573100-m02.txt": (9.6129429s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100-m04:/home/docker/cp-test.txt ha-573100-m03:/home/docker/cp-test_ha-573100-m04_ha-573100-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 cp ha-573100-m04:/home/docker/cp-test.txt ha-573100-m03:/home/docker/cp-test_ha-573100-m04_ha-573100-m03.txt: (16.9537218s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m04 "sudo cat /home/docker/cp-test.txt": (9.7416726s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m03 "sudo cat /home/docker/cp-test_ha-573100-m04_ha-573100-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p ha-573100 ssh -n ha-573100-m03 "sudo cat /home/docker/cp-test_ha-573100-m04_ha-573100-m03.txt": (9.6653797s)
--- PASS: TestMultiControlPlane/serial/CopyFile (641.37s)
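Note: the CopyFile run above is one long chain of cp/ssh round-trips. As a minimal sketch only (the runMinikube helper is ad hoc and not part of the test suite; binary path, profile, node, and file paths are taken from the log above), one iteration can be reproduced by shelling out to the same binary the helpers invoke:

	package main

	import (
		"fmt"
		"os/exec"
		"path/filepath"
	)

	// runMinikube shells out to the same binary the test helpers above invoke.
	func runMinikube(args ...string) (string, error) {
		bin := filepath.FromSlash("out/minikube-windows-amd64.exe")
		out, err := exec.Command(bin, args...).CombinedOutput()
		return string(out), err
	}

	func main() {
		src := filepath.Join("testdata", "cp-test.txt")

		// Copy a local file onto the primary node, then read it back over SSH,
		// mirroring one helpers_test.go:556 / helpers_test.go:534 pair in the log.
		if out, err := runMinikube("-p", "ha-573100", "cp", src, "ha-573100:/home/docker/cp-test.txt"); err != nil {
			panic(fmt.Sprintf("cp failed: %v\n%s", err, out))
		}
		out, err := runMinikube("-p", "ha-573100", "ssh", "-n", "ha-573100", "sudo cat /home/docker/cp-test.txt")
		if err != nil {
			panic(fmt.Sprintf("ssh failed: %v\n%s", err, out))
		}
		fmt.Print(out) // should echo the contents of testdata\cp-test.txt
	}
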

                                                
                                    
TestImageBuild/serial/Setup (192.61s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-windows-amd64.exe start -p image-209100 --driver=hyperv
E0407 13:31:55.749001    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 13:31:57.541616    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
image_test.go:69: (dbg) Done: out/minikube-windows-amd64.exe start -p image-209100 --driver=hyperv: (3m12.6078587s)
--- PASS: TestImageBuild/serial/Setup (192.61s)

                                                
                                    
TestImageBuild/serial/NormalBuild (10.45s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-209100
image_test.go:78: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal -p image-209100: (10.4530372s)
--- PASS: TestImageBuild/serial/NormalBuild (10.45s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (8.76s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-209100
image_test.go:99: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-209100: (8.7610138s)
--- PASS: TestImageBuild/serial/BuildWithBuildArg (8.76s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (8.09s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-209100
image_test.go:133: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-209100: (8.0893435s)
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (8.09s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (8.12s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-209100
E0407 13:33:54.462241    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
image_test.go:88: (dbg) Done: out/minikube-windows-amd64.exe image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-209100: (8.1207561s)
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (8.12s)

                                                
                                    
TestJSONOutput/start/Command (199.03s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-630000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv
E0407 13:36:55.750363    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe start -p json-output-630000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperv: (3m19.0274625s)
--- PASS: TestJSONOutput/start/Command (199.03s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)
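Note: the DistinctCurrentSteps and IncreasingCurrentSteps subtests assert properties of the currentstep field in the event stream produced by --output=json. The sketch below is illustrative only, not the tests' actual implementation; the event shape is assumed from the CloudEvents-style lines visible in the TestErrorJSONOutput stdout later in this report.

	package main

	import (
		"bufio"
		"encoding/json"
		"fmt"
		"os"
		"strconv"
	)

	// event mirrors the JSON lines minikube emits with --output=json
	// (see the TestErrorJSONOutput stdout later in this report).
	type event struct {
		Type string            `json:"type"`
		Data map[string]string `json:"data"`
	}

	func main() {
		seen := map[int]bool{}
		last := -1
		sc := bufio.NewScanner(os.Stdin) // pipe the --output=json stream in here
		for sc.Scan() {
			var ev event
			if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
				continue // skip non-JSON lines
			}
			step, ok := ev.Data["currentstep"]
			if !ok || ev.Type != "io.k8s.sigs.minikube.step" {
				continue
			}
			n, err := strconv.Atoi(step)
			if err != nil {
				continue
			}
			if seen[n] {
				fmt.Println("duplicate currentstep:", n)
			}
			if n < last {
				fmt.Println("currentstep went backwards:", n)
			}
			seen[n], last = true, n
		}
	}
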

                                                
                                    
TestJSONOutput/pause/Command (7.95s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe pause -p json-output-630000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe pause -p json-output-630000 --output=json --user=testUser: (7.9455645s)
--- PASS: TestJSONOutput/pause/Command (7.95s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (7.77s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe unpause -p json-output-630000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe unpause -p json-output-630000 --output=json --user=testUser: (7.7736273s)
--- PASS: TestJSONOutput/unpause/Command (7.77s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (34.13s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-windows-amd64.exe stop -p json-output-630000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-windows-amd64.exe stop -p json-output-630000 --output=json --user=testUser: (34.1283126s)
--- PASS: TestJSONOutput/stop/Command (34.13s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

                                                
                                                

                                                
                                                
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.97s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-windows-amd64.exe start -p json-output-error-333800 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p json-output-error-333800 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (279.9867ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"7f628ce2-ca02-44cc-990c-880ef6b565e0","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-333800] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"f5151345-8138-435e-bce4-f91808f7f98c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=C:\\Users\\jenkins.minikube3\\minikube-integration\\kubeconfig"}}
	{"specversion":"1.0","id":"e2f4cadf-13a0-4a77-a070-6deb393359fb","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"2ab28f7d-d159-4d84-97f7-c79497579fb1","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube"}}
	{"specversion":"1.0","id":"0ca8307e-d734-4ae8-9848-e6b657565b20","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=20598"}}
	{"specversion":"1.0","id":"8a69736b-70ad-4527-891e-e244617c526e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d8351e04-6fc6-46d0-bc7e-84a4ac491f55","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-333800" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p json-output-error-333800
--- PASS: TestErrorJSONOutput (0.97s)
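Note: the final event in the stdout block above is the machine-readable error that gives the command its exit status 56. A minimal, ad hoc decode of that line (the Go types here are illustrative, not minikube's own) looks like this:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	// errorEvent captures the fields of the io.k8s.sigs.minikube.error line shown above.
	type errorEvent struct {
		Type string `json:"type"`
		Data struct {
			ExitCode string `json:"exitcode"`
			Message  string `json:"message"`
			Name     string `json:"name"`
		} `json:"data"`
	}

	func main() {
		// Last event from the -- stdout -- block above, trimmed to the relevant fields.
		line := `{"specversion":"1.0","type":"io.k8s.sigs.minikube.error","data":{"exitcode":"56","message":"The driver 'fail' is not supported on windows/amd64","name":"DRV_UNSUPPORTED_OS"}}`

		var ev errorEvent
		if err := json.Unmarshal([]byte(line), &ev); err != nil {
			panic(err)
		}
		fmt.Println(ev.Data.ExitCode == "56", ev.Data.Name) // true DRV_UNSUPPORTED_OS
	}
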

                                                
                                    
TestMainNoArgs (0.23s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-windows-amd64.exe
--- PASS: TestMainNoArgs (0.23s)

                                                
                                    
TestMinikubeProfile (527.31s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p first-496300 --driver=hyperv
E0407 13:41:55.752240    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p first-496300 --driver=hyperv: (3m15.0791108s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p second-606100 --driver=hyperv
E0407 13:43:18.837329    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 13:43:54.465062    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
minikube_profile_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p second-606100 --driver=hyperv: (3m19.0616255s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile first-496300
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (23.9532982s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-windows-amd64.exe profile second-606100
minikube_profile_test.go:55: (dbg) Run:  out/minikube-windows-amd64.exe profile list -ojson
minikube_profile_test.go:55: (dbg) Done: out/minikube-windows-amd64.exe profile list -ojson: (23.9322709s)
helpers_test.go:175: Cleaning up "second-606100" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p second-606100
E0407 13:46:55.754029    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p second-606100: (44.6359349s)
helpers_test.go:175: Cleaning up "first-496300" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p first-496300
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p first-496300: (39.981916s)
--- PASS: TestMinikubeProfile (527.31s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (151.53s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-1-007800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv
E0407 13:48:37.550220    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 13:48:54.468298    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-1-007800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m30.5307753s)
--- PASS: TestMountStart/serial/StartWithMountFirst (151.53s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (9.43s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-1-007800 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-1-007800 ssh -- ls /minikube-host: (9.4302524s)
--- PASS: TestMountStart/serial/VerifyMountFirst (9.43s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (153.28s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-007800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv
E0407 13:51:55.756973    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
mount_start_test.go:98: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-007800 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperv: (2m32.2802423s)
--- PASS: TestMountStart/serial/StartWithMountSecond (153.28s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (9.36s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-007800 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-007800 ssh -- ls /minikube-host: (9.3589623s)
--- PASS: TestMountStart/serial/VerifyMountSecond (9.36s)

                                                
                                    
TestMountStart/serial/DeleteFirst (30.31s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-windows-amd64.exe delete -p mount-start-1-007800 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-windows-amd64.exe delete -p mount-start-1-007800 --alsologtostderr -v=5: (30.3091303s)
--- PASS: TestMountStart/serial/DeleteFirst (30.31s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (9.38s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-007800 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-007800 ssh -- ls /minikube-host: (9.3784824s)
--- PASS: TestMountStart/serial/VerifyMountPostDelete (9.38s)

                                                
                                    
TestMountStart/serial/Stop (26.16s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-windows-amd64.exe stop -p mount-start-2-007800
E0407 13:53:54.470769    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
mount_start_test.go:155: (dbg) Done: out/minikube-windows-amd64.exe stop -p mount-start-2-007800: (26.1599529s)
--- PASS: TestMountStart/serial/Stop (26.16s)

                                                
                                    
TestMountStart/serial/RestartStopped (117.97s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-windows-amd64.exe start -p mount-start-2-007800
mount_start_test.go:166: (dbg) Done: out/minikube-windows-amd64.exe start -p mount-start-2-007800: (1m56.9746522s)
--- PASS: TestMountStart/serial/RestartStopped (117.97s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (9.51s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-windows-amd64.exe -p mount-start-2-007800 ssh -- ls /minikube-host
mount_start_test.go:114: (dbg) Done: out/minikube-windows-amd64.exe -p mount-start-2-007800 ssh -- ls /minikube-host: (9.5079945s)
--- PASS: TestMountStart/serial/VerifyMountPostStop (9.51s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (430.34s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-windows-amd64.exe start -p multinode-140200 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv
E0407 13:58:54.472740    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 13:59:58.846514    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 14:01:55.759675    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:96: (dbg) Done: out/minikube-windows-amd64.exe start -p multinode-140200 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperv: (6m46.4638638s)
multinode_test.go:102: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 status --alsologtostderr
E0407 14:03:54.473711    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:102: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 status --alsologtostderr: (23.8769567s)
--- PASS: TestMultiNode/serial/FreshStart2Nodes (430.34s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (9.98s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-140200 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-140200 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-140200 -- rollout status deployment/busybox: (3.9972009s)
multinode_test.go:505: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-140200 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-140200 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-140200 -- exec busybox-58667487b6-kt4sh -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Done: out/minikube-windows-amd64.exe kubectl -p multinode-140200 -- exec busybox-58667487b6-kt4sh -- nslookup kubernetes.io: (1.9513427s)
multinode_test.go:536: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-140200 -- exec busybox-58667487b6-vgl84 -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-140200 -- exec busybox-58667487b6-kt4sh -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-140200 -- exec busybox-58667487b6-vgl84 -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-140200 -- exec busybox-58667487b6-kt4sh -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-windows-amd64.exe kubectl -p multinode-140200 -- exec busybox-58667487b6-vgl84 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (9.98s)
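Note: the DeployApp2Nodes checks above boil down to running nslookup inside each busybox pod for three names. A compact way to script the same probe against the multinode-140200 profile, again by shelling out to the test binary, is sketched below; the loop is illustrative, while the pod names, profile, and commands are the ones in the log.

	package main

	import (
		"fmt"
		"os/exec"
		"path/filepath"
	)

	func main() {
		minikube := filepath.FromSlash("out/minikube-windows-amd64.exe")
		pods := []string{"busybox-58667487b6-kt4sh", "busybox-58667487b6-vgl84"} // pod names from the log
		names := []string{"kubernetes.io", "kubernetes.default", "kubernetes.default.svc.cluster.local"}

		for _, pod := range pods {
			for _, name := range names {
				// Mirrors: out/minikube-windows-amd64.exe kubectl -p multinode-140200 -- exec <pod> -- nslookup <name>
				cmd := exec.Command(minikube, "kubectl", "-p", "multinode-140200", "--", "exec", pod, "--", "nslookup", name)
				out, err := cmd.CombinedOutput()
				fmt.Printf("%s -> %s: err=%v\n%s\n", pod, name, err, out)
			}
		}
	}
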

                                                
                                    
TestMultiNode/serial/AddNode (239.5s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-windows-amd64.exe node add -p multinode-140200 -v 3 --alsologtostderr
E0407 14:05:17.559820    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 14:06:55.761915    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:121: (dbg) Done: out/minikube-windows-amd64.exe node add -p multinode-140200 -v 3 --alsologtostderr: (3m23.8907957s)
multinode_test.go:127: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 status --alsologtostderr
E0407 14:08:54.475343    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:127: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 status --alsologtostderr: (35.6077431s)
--- PASS: TestMultiNode/serial/AddNode (239.50s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.17s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-140200 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.17s)

                                                
                                    
TestMultiNode/serial/ProfileList (35.8s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-windows-amd64.exe profile list --output json
multinode_test.go:143: (dbg) Done: out/minikube-windows-amd64.exe profile list --output json: (35.8010187s)
--- PASS: TestMultiNode/serial/ProfileList (35.80s)

                                                
                                    
TestMultiNode/serial/CopyFile (371.86s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 status --output json --alsologtostderr
multinode_test.go:184: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 status --output json --alsologtostderr: (36.191107s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 cp testdata\cp-test.txt multinode-140200:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 cp testdata\cp-test.txt multinode-140200:/home/docker/cp-test.txt: (9.8755414s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200 "sudo cat /home/docker/cp-test.txt": (9.8427491s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 cp multinode-140200:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile1343764680\001\cp-test_multinode-140200.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 cp multinode-140200:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile1343764680\001\cp-test_multinode-140200.txt: (9.4315628s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200 "sudo cat /home/docker/cp-test.txt": (9.6085345s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 cp multinode-140200:/home/docker/cp-test.txt multinode-140200-m02:/home/docker/cp-test_multinode-140200_multinode-140200-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 cp multinode-140200:/home/docker/cp-test.txt multinode-140200-m02:/home/docker/cp-test_multinode-140200_multinode-140200-m02.txt: (16.5094138s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200 "sudo cat /home/docker/cp-test.txt": (9.4682761s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200-m02 "sudo cat /home/docker/cp-test_multinode-140200_multinode-140200-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200-m02 "sudo cat /home/docker/cp-test_multinode-140200_multinode-140200-m02.txt": (9.5183246s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 cp multinode-140200:/home/docker/cp-test.txt multinode-140200-m03:/home/docker/cp-test_multinode-140200_multinode-140200-m03.txt
E0407 14:11:55.764712    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 cp multinode-140200:/home/docker/cp-test.txt multinode-140200-m03:/home/docker/cp-test_multinode-140200_multinode-140200-m03.txt: (16.424618s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200 "sudo cat /home/docker/cp-test.txt": (9.5547346s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200-m03 "sudo cat /home/docker/cp-test_multinode-140200_multinode-140200-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200-m03 "sudo cat /home/docker/cp-test_multinode-140200_multinode-140200-m03.txt": (9.4797929s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 cp testdata\cp-test.txt multinode-140200-m02:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 cp testdata\cp-test.txt multinode-140200-m02:/home/docker/cp-test.txt: (9.49003s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200-m02 "sudo cat /home/docker/cp-test.txt": (9.541495s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 cp multinode-140200-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile1343764680\001\cp-test_multinode-140200-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 cp multinode-140200-m02:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile1343764680\001\cp-test_multinode-140200-m02.txt: (9.5152376s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200-m02 "sudo cat /home/docker/cp-test.txt": (9.4195329s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 cp multinode-140200-m02:/home/docker/cp-test.txt multinode-140200:/home/docker/cp-test_multinode-140200-m02_multinode-140200.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 cp multinode-140200-m02:/home/docker/cp-test.txt multinode-140200:/home/docker/cp-test_multinode-140200-m02_multinode-140200.txt: (16.7090003s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200-m02 "sudo cat /home/docker/cp-test.txt": (9.5645219s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200 "sudo cat /home/docker/cp-test_multinode-140200-m02_multinode-140200.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200 "sudo cat /home/docker/cp-test_multinode-140200-m02_multinode-140200.txt": (9.4746757s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 cp multinode-140200-m02:/home/docker/cp-test.txt multinode-140200-m03:/home/docker/cp-test_multinode-140200-m02_multinode-140200-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 cp multinode-140200-m02:/home/docker/cp-test.txt multinode-140200-m03:/home/docker/cp-test_multinode-140200-m02_multinode-140200-m03.txt: (16.4590148s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200-m02 "sudo cat /home/docker/cp-test.txt"
E0407 14:13:54.478908    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200-m02 "sudo cat /home/docker/cp-test.txt": (9.4521093s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200-m03 "sudo cat /home/docker/cp-test_multinode-140200-m02_multinode-140200-m03.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200-m03 "sudo cat /home/docker/cp-test_multinode-140200-m02_multinode-140200-m03.txt": (9.4110069s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 cp testdata\cp-test.txt multinode-140200-m03:/home/docker/cp-test.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 cp testdata\cp-test.txt multinode-140200-m03:/home/docker/cp-test.txt: (9.6516734s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200-m03 "sudo cat /home/docker/cp-test.txt": (9.7470398s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 cp multinode-140200-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile1343764680\001\cp-test_multinode-140200-m03.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 cp multinode-140200-m03:/home/docker/cp-test.txt C:\Users\jenkins.minikube3\AppData\Local\Temp\TestMultiNodeserialCopyFile1343764680\001\cp-test_multinode-140200-m03.txt: (10.0171787s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200-m03 "sudo cat /home/docker/cp-test.txt": (9.97806s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 cp multinode-140200-m03:/home/docker/cp-test.txt multinode-140200:/home/docker/cp-test_multinode-140200-m03_multinode-140200.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 cp multinode-140200-m03:/home/docker/cp-test.txt multinode-140200:/home/docker/cp-test_multinode-140200-m03_multinode-140200.txt: (18.1746989s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200-m03 "sudo cat /home/docker/cp-test.txt": (10.3705842s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200 "sudo cat /home/docker/cp-test_multinode-140200-m03_multinode-140200.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200 "sudo cat /home/docker/cp-test_multinode-140200-m03_multinode-140200.txt": (10.454484s)
helpers_test.go:556: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 cp multinode-140200-m03:/home/docker/cp-test.txt multinode-140200-m02:/home/docker/cp-test_multinode-140200-m03_multinode-140200-m02.txt
helpers_test.go:556: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 cp multinode-140200-m03:/home/docker/cp-test.txt multinode-140200-m02:/home/docker/cp-test_multinode-140200-m03_multinode-140200-m02.txt: (18.1884824s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200-m03 "sudo cat /home/docker/cp-test.txt": (10.0269987s)
helpers_test.go:534: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200-m02 "sudo cat /home/docker/cp-test_multinode-140200-m03_multinode-140200-m02.txt"
helpers_test.go:534: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 ssh -n multinode-140200-m02 "sudo cat /home/docker/cp-test_multinode-140200-m03_multinode-140200-m02.txt": (10.2827855s)
--- PASS: TestMultiNode/serial/CopyFile (371.86s)
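
The CopyFile steps above reduce to a `minikube cp` into a node followed by `minikube ssh -n <node> "sudo cat ..."` to read the file back. A minimal Go sketch of that round-trip, reusing the binary path and profile name from this run (treat both as placeholders in other environments):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	// run invokes the minikube binary used in this report and returns its combined output.
	func run(args ...string) (string, error) {
		out, err := exec.Command("out/minikube-windows-amd64.exe", args...).CombinedOutput()
		return string(out), err
	}
	
	func main() {
		profile := "multinode-140200" // profile name taken from this run
		// Copy a local file into the control-plane node, then read it back over SSH.
		if _, err := run("-p", profile, "cp", "testdata/cp-test.txt", profile+":/home/docker/cp-test.txt"); err != nil {
			panic(err)
		}
		out, err := run("-p", profile, "ssh", "-n", profile, "sudo cat /home/docker/cp-test.txt")
		if err != nil {
			panic(err)
		}
		fmt.Print(out)
	}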

                                                
                                    
x
+
TestMultiNode/serial/StopNode (82.16s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 node stop m03: (26.4942752s)
multinode_test.go:254: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 status
E0407 14:16:38.856584    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 14:16:55.765919    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-140200 status: exit status 7 (27.9764887s)

                                                
                                                
-- stdout --
	multinode-140200
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-140200-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-140200-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-windows-amd64.exe -p multinode-140200 status --alsologtostderr: exit status 7 (27.687152s)

                                                
                                                
-- stdout --
	multinode-140200
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-140200-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-140200-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 14:16:57.382562    2620 out.go:345] Setting OutFile to fd 1736 ...
	I0407 14:16:57.461910    2620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 14:16:57.461910    2620 out.go:358] Setting ErrFile to fd 796...
	I0407 14:16:57.461910    2620 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 14:16:57.480976    2620 out.go:352] Setting JSON to false
	I0407 14:16:57.480976    2620 mustload.go:65] Loading cluster: multinode-140200
	I0407 14:16:57.480976    2620 notify.go:220] Checking for updates...
	I0407 14:16:57.482241    2620 config.go:182] Loaded profile config "multinode-140200": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 14:16:57.482241    2620 status.go:174] checking status of multinode-140200 ...
	I0407 14:16:57.483132    2620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:16:59.805045    2620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:16:59.805340    2620 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:16:59.805340    2620 status.go:371] multinode-140200 host status = "Running" (err=<nil>)
	I0407 14:16:59.805422    2620 host.go:66] Checking if "multinode-140200" exists ...
	I0407 14:16:59.806316    2620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:17:02.174819    2620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:17:02.175544    2620 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:17:02.175676    2620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:17:04.916265    2620 main.go:141] libmachine: [stdout =====>] : 172.17.92.89
	
	I0407 14:17:04.916265    2620 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:17:04.916265    2620 host.go:66] Checking if "multinode-140200" exists ...
	I0407 14:17:04.930010    2620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 14:17:04.930010    2620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200 ).state
	I0407 14:17:07.205265    2620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:17:07.205265    2620 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:17:07.205699    2620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200 ).networkadapters[0]).ipaddresses[0]
	I0407 14:17:09.874299    2620 main.go:141] libmachine: [stdout =====>] : 172.17.92.89
	
	I0407 14:17:09.874299    2620 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:17:09.875093    2620 sshutil.go:53] new ssh client: &{IP:172.17.92.89 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200\id_rsa Username:docker}
	I0407 14:17:09.978067    2620 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.0480201s)
	I0407 14:17:09.990649    2620 ssh_runner.go:195] Run: systemctl --version
	I0407 14:17:10.019759    2620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 14:17:10.053598    2620 kubeconfig.go:125] found "multinode-140200" server: "https://172.17.92.89:8443"
	I0407 14:17:10.053676    2620 api_server.go:166] Checking apiserver status ...
	I0407 14:17:10.064302    2620 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0407 14:17:10.107715    2620 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2102/cgroup
	W0407 14:17:10.133244    2620 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2102/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0407 14:17:10.147368    2620 ssh_runner.go:195] Run: ls
	I0407 14:17:10.155150    2620 api_server.go:253] Checking apiserver healthz at https://172.17.92.89:8443/healthz ...
	I0407 14:17:10.163978    2620 api_server.go:279] https://172.17.92.89:8443/healthz returned 200:
	ok
	I0407 14:17:10.163978    2620 status.go:463] multinode-140200 apiserver status = Running (err=<nil>)
	I0407 14:17:10.163978    2620 status.go:176] multinode-140200 status: &{Name:multinode-140200 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0407 14:17:10.163978    2620 status.go:174] checking status of multinode-140200-m02 ...
	I0407 14:17:10.164785    2620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:17:12.474711    2620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:17:12.474711    2620 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:17:12.474711    2620 status.go:371] multinode-140200-m02 host status = "Running" (err=<nil>)
	I0407 14:17:12.474711    2620 host.go:66] Checking if "multinode-140200-m02" exists ...
	I0407 14:17:12.475574    2620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:17:14.774708    2620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:17:14.774708    2620 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:17:14.775734    2620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:17:17.545453    2620 main.go:141] libmachine: [stdout =====>] : 172.17.82.40
	
	I0407 14:17:17.546295    2620 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:17:17.546295    2620 host.go:66] Checking if "multinode-140200-m02" exists ...
	I0407 14:17:17.558319    2620 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0407 14:17:17.558319    2620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m02 ).state
	I0407 14:17:19.819091    2620 main.go:141] libmachine: [stdout =====>] : Running
	
	I0407 14:17:19.819437    2620 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:17:19.819515    2620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive (( Hyper-V\Get-VM multinode-140200-m02 ).networkadapters[0]).ipaddresses[0]
	I0407 14:17:22.530929    2620 main.go:141] libmachine: [stdout =====>] : 172.17.82.40
	
	I0407 14:17:22.531576    2620 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:17:22.531939    2620 sshutil.go:53] new ssh client: &{IP:172.17.82.40 Port:22 SSHKeyPath:C:\Users\jenkins.minikube3\minikube-integration\.minikube\machines\multinode-140200-m02\id_rsa Username:docker}
	I0407 14:17:22.633825    2620 ssh_runner.go:235] Completed: sh -c "df -h /var | awk 'NR==2{print $5}'": (5.0754688s)
	I0407 14:17:22.645082    2620 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0407 14:17:22.670423    2620 status.go:176] multinode-140200-m02 status: &{Name:multinode-140200-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0407 14:17:22.670423    2620 status.go:174] checking status of multinode-140200-m03 ...
	I0407 14:17:22.670423    2620 main.go:141] libmachine: [executing ==>] : C:\WINDOWS\System32\WindowsPowerShell\v1.0\powershell.exe -NoProfile -NonInteractive ( Hyper-V\Get-VM multinode-140200-m03 ).state
	I0407 14:17:24.911074    2620 main.go:141] libmachine: [stdout =====>] : Off
	
	I0407 14:17:24.911074    2620 main.go:141] libmachine: [stderr =====>] : 
	I0407 14:17:24.911977    2620 status.go:371] multinode-140200-m03 host status = "Stopped" (err=<nil>)
	I0407 14:17:24.911977    2620 status.go:384] host is not running, skipping remaining checks
	I0407 14:17:24.911977    2620 status.go:176] multinode-140200-m03 status: &{Name:multinode-140200-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (82.16s)
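
Note the exit code in the output above: with one worker stopped, `minikube status` still prints the per-node table but exits with status 7, so a wrapper has to read the output rather than rely on a zero exit. A small Go sketch of that handling (binary path and profile name taken from this run):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		cmd := exec.Command("out/minikube-windows-amd64.exe", "-p", "multinode-140200", "status")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
	
		if exitErr, ok := err.(*exec.ExitError); ok {
			// In this run exit status 7 simply reflects the stopped worker;
			// treat it as informational rather than as a hard failure.
			fmt.Printf("status exited with code %d\n", exitErr.ExitCode())
		} else if err != nil {
			panic(err)
		}
	}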

                                                
                                    
x
+
TestMultiNode/serial/StartAfterStop (205.48s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 node start m03 -v=7 --alsologtostderr
E0407 14:18:54.479800    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
multinode_test.go:282: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 node start m03 -v=7 --alsologtostderr: (2m47.4573155s)
multinode_test.go:290: (dbg) Run:  out/minikube-windows-amd64.exe -p multinode-140200 status -v=7 --alsologtostderr
multinode_test.go:290: (dbg) Done: out/minikube-windows-amd64.exe -p multinode-140200 status -v=7 --alsologtostderr: (37.8375116s)
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (205.48s)
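
After the stopped worker is started again, the test only asserts that `kubectl get nodes` can see the cluster members. A minimal sketch of that check, assuming kubectl is on PATH and pointed at the kubeconfig written for this profile:

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		out, err := exec.Command("kubectl", "get", "nodes", "--no-headers").Output()
		if err != nil {
			panic(err)
		}
		// The three-node cluster from this run: multinode-140200, -m02 and -m03.
		nodes := strings.Split(strings.TrimSpace(string(out)), "\n")
		fmt.Printf("cluster reports %d nodes\n", len(nodes))
	}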

                                                
                                    
x
+
TestPreload (539.01s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-279400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4
E0407 14:31:55.773617    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 14:33:18.867302    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 14:33:54.487094    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
preload_test.go:44: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-279400 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperv --kubernetes-version=v1.24.4: (4m41.3918876s)
preload_test.go:52: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-279400 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-279400 image pull gcr.io/k8s-minikube/busybox: (9.5019934s)
preload_test.go:58: (dbg) Run:  out/minikube-windows-amd64.exe stop -p test-preload-279400
preload_test.go:58: (dbg) Done: out/minikube-windows-amd64.exe stop -p test-preload-279400: (41.2176068s)
preload_test.go:66: (dbg) Run:  out/minikube-windows-amd64.exe start -p test-preload-279400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv
E0407 14:36:55.775824    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
preload_test.go:66: (dbg) Done: out/minikube-windows-amd64.exe start -p test-preload-279400 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperv: (2m37.0864953s)
preload_test.go:71: (dbg) Run:  out/minikube-windows-amd64.exe -p test-preload-279400 image list
preload_test.go:71: (dbg) Done: out/minikube-windows-amd64.exe -p test-preload-279400 image list: (7.4453899s)
helpers_test.go:175: Cleaning up "test-preload-279400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p test-preload-279400
E0407 14:38:37.579943    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p test-preload-279400: (42.3646257s)
--- PASS: TestPreload (539.01s)
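
The point of the sequence above is that the busybox image pulled before the stop must still be listed after the second start switches to a preloaded tarball. A hedged Go sketch of that final verification (binary path and profile name taken from this run):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
	)
	
	func main() {
		out, err := exec.Command("out/minikube-windows-amd64.exe",
			"-p", "test-preload-279400", "image", "list").CombinedOutput()
		if err != nil {
			panic(err)
		}
		// The image pulled before the stop must survive the preload-enabled restart.
		if !strings.Contains(string(out), "gcr.io/k8s-minikube/busybox") {
			fmt.Println("FAIL: busybox image was lost across the restart")
			return
		}
		fmt.Println("OK: busybox image survived the restart")
	}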

                                                
                                    
x
+
TestScheduledStopWindows (337.14s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-windows-amd64.exe start -p scheduled-stop-461500 --memory=2048 --driver=hyperv
E0407 14:38:54.489797    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 14:41:55.777489    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
scheduled_stop_test.go:128: (dbg) Done: out/minikube-windows-amd64.exe start -p scheduled-stop-461500 --memory=2048 --driver=hyperv: (3m23.215883s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-461500 --schedule 5m
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-461500 --schedule 5m: (10.9766221s)
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-461500 -n scheduled-stop-461500
scheduled_stop_test.go:191: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.TimeToStop}} -p scheduled-stop-461500 -n scheduled-stop-461500: exit status 1 (10.0121809s)
scheduled_stop_test.go:191: status error: exit status 1 (may be ok)
scheduled_stop_test.go:54: (dbg) Run:  out/minikube-windows-amd64.exe ssh -p scheduled-stop-461500 -- sudo systemctl show minikube-scheduled-stop --no-page
scheduled_stop_test.go:54: (dbg) Done: out/minikube-windows-amd64.exe ssh -p scheduled-stop-461500 -- sudo systemctl show minikube-scheduled-stop --no-page: (9.9245696s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-windows-amd64.exe stop -p scheduled-stop-461500 --schedule 5s
scheduled_stop_test.go:137: (dbg) Done: out/minikube-windows-amd64.exe stop -p scheduled-stop-461500 --schedule 5s: (10.9318648s)
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-windows-amd64.exe status -p scheduled-stop-461500
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p scheduled-stop-461500: exit status 7 (2.5612829s)

                                                
                                                
-- stdout --
	scheduled-stop-461500
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-461500 -n scheduled-stop-461500
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status --format={{.Host}} -p scheduled-stop-461500 -n scheduled-stop-461500: exit status 7 (2.4745149s)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-461500" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p scheduled-stop-461500
E0407 14:43:54.491400    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p scheduled-stop-461500: (27.0341904s)
--- PASS: TestScheduledStopWindows (337.14s)
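
The scheduled-stop flow above is: schedule a stop, confirm it via `status --format={{.TimeToStop}}` and the `minikube-scheduled-stop` systemd unit, reschedule with a short delay, then observe the host reach Stopped. A sketch of the schedule-and-poll part, assuming the same binary and profile as this run (the 15s delay is an arbitrary choice for illustration):

	package main
	
	import (
		"fmt"
		"os/exec"
		"strings"
		"time"
	)
	
	const (
		minikube = "out/minikube-windows-amd64.exe" // binary path used in this run
		profile  = "scheduled-stop-461500"          // profile name used in this run
	)
	
	func main() {
		// Schedule a stop a few seconds out, mirroring the "--schedule 5s" step above.
		if out, err := exec.Command(minikube, "stop", "-p", profile, "--schedule", "15s").CombinedOutput(); err != nil {
			panic(string(out))
		}
		// Poll the host state until the scheduled stop has taken effect.
		// The error is ignored because status exits non-zero once the host is stopped.
		for i := 0; i < 10; i++ {
			out, _ := exec.Command(minikube, "status", "--format={{.Host}}", "-p", profile).CombinedOutput()
			if strings.TrimSpace(string(out)) == "Stopped" {
				fmt.Println("scheduled stop completed")
				return
			}
			time.Sleep(10 * time.Second)
		}
		fmt.Println("timed out waiting for the scheduled stop")
	}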

                                                
                                    
x
+
TestRunningBinaryUpgrade (1105.26s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.1104415428.exe start -p running-upgrade-817400 --memory=2200 --vm-driver=hyperv
E0407 14:46:55.781338    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:120: (dbg) Done: C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.1104415428.exe start -p running-upgrade-817400 --memory=2200 --vm-driver=hyperv: (8m26.9463456s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-windows-amd64.exe start -p running-upgrade-817400 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0407 14:53:54.497698    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
E0407 14:55:17.599829    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:130: (dbg) Done: out/minikube-windows-amd64.exe start -p running-upgrade-817400 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (8m42.6546774s)
helpers_test.go:175: Cleaning up "running-upgrade-817400" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-windows-amd64.exe delete -p running-upgrade-817400
E0407 15:01:55.787779    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
helpers_test.go:178: (dbg) Done: out/minikube-windows-amd64.exe delete -p running-upgrade-817400: (1m14.6389867s)
--- PASS: TestRunningBinaryUpgrade (1105.26s)
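
TestRunningBinaryUpgrade amounts to two starts of the same profile: one with a previously released minikube binary and one with the binary under test, without stopping the cluster in between. A sketch under that assumption; the old-binary path is a placeholder, not the temporary file used in this run:

	package main
	
	import (
		"os"
		"os/exec"
	)
	
	// run executes a minikube invocation, streaming its output and panicking on failure.
	func run(binary string, args ...string) {
		cmd := exec.Command(binary, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			panic(err)
		}
	}
	
	func main() {
		oldBinary := `C:\path\to\minikube-v1.26.0.exe` // placeholder for the downloaded release used above
		profile := "running-upgrade-817400"
	
		// 1. Bring the cluster up with the previously released binary.
		run(oldBinary, "start", "-p", profile, "--memory=2200", "--vm-driver=hyperv")
		// 2. Without stopping it, re-run start on the same profile with the binary under test.
		run("out/minikube-windows-amd64.exe", "start", "-p", profile, "--memory=2200", "--alsologtostderr", "-v=1", "--driver=hyperv")
	}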

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoK8sWithVersion (0.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-windows-amd64.exe start -p NoKubernetes-817400 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p NoKubernetes-817400 --no-kubernetes --kubernetes-version=1.20 --driver=hyperv: exit status 14 (386.594ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-817400] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.39s)
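
The failure above is intentional: `--no-kubernetes` and `--kubernetes-version` are mutually exclusive, and minikube reports that as a usage error with exit status 14. A sketch that expects exactly that outcome (binary path and profile name taken from this run):

	package main
	
	import (
		"fmt"
		"os/exec"
	)
	
	func main() {
		// These two flags conflict, as the MK_USAGE error above shows.
		cmd := exec.Command("out/minikube-windows-amd64.exe",
			"start", "-p", "NoKubernetes-817400", "--no-kubernetes", "--kubernetes-version=1.20", "--driver=hyperv")
		out, err := cmd.CombinedOutput()
		if exitErr, ok := err.(*exec.ExitError); ok && exitErr.ExitCode() == 14 {
			fmt.Println("got the expected usage error:")
			fmt.Print(string(out))
			return
		}
		panic("expected exit status 14 for the conflicting flags")
	}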

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Setup (0.96s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.96s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/Upgrade (925.04s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.4280279205.exe start -p stopped-upgrade-523500 --memory=2200 --vm-driver=hyperv
E0407 14:49:58.878215    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:183: (dbg) Done: C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.4280279205.exe start -p stopped-upgrade-523500 --memory=2200 --vm-driver=hyperv: (8m17.8331357s)
version_upgrade_test.go:192: (dbg) Run:  C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.4280279205.exe -p stopped-upgrade-523500 stop
version_upgrade_test.go:192: (dbg) Done: C:\Users\jenkins.minikube3\AppData\Local\Temp\minikube-v1.26.0.4280279205.exe -p stopped-upgrade-523500 stop: (36.0307044s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-windows-amd64.exe start -p stopped-upgrade-523500 --memory=2200 --alsologtostderr -v=1 --driver=hyperv
E0407 14:58:54.499882    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\functional-168700\\client.crt: The system cannot find the path specified." logger="UnhandledError"
version_upgrade_test.go:198: (dbg) Done: out/minikube-windows-amd64.exe start -p stopped-upgrade-523500 --memory=2200 --alsologtostderr -v=1 --driver=hyperv: (6m31.1719542s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (925.04s)

                                                
                                    
x
+
TestPause/serial/Start (549.14s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-061700 --memory=2048 --install-addons=false --wait=all --driver=hyperv
E0407 14:51:55.782591    7728 cert_rotation.go:171] "Unhandled Error" err="key failed with : open C:\\Users\\jenkins.minikube3\\minikube-integration\\.minikube\\profiles\\addons-823400\\client.crt: The system cannot find the path specified." logger="UnhandledError"
pause_test.go:80: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-061700 --memory=2048 --install-addons=false --wait=all --driver=hyperv: (9m9.1432497s)
--- PASS: TestPause/serial/Start (549.14s)

                                                
                                    
x
+
TestPause/serial/SecondStartNoReconfiguration (320.18s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-windows-amd64.exe start -p pause-061700 --alsologtostderr -v=1 --driver=hyperv
pause_test.go:92: (dbg) Done: out/minikube-windows-amd64.exe start -p pause-061700 --alsologtostderr -v=1 --driver=hyperv: (5m20.1393104s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (320.18s)

                                                
                                    
x
+
TestStoppedBinaryUpgrade/MinikubeLogs (10.08s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-windows-amd64.exe logs -p stopped-upgrade-523500
version_upgrade_test.go:206: (dbg) Done: out/minikube-windows-amd64.exe logs -p stopped-upgrade-523500: (10.0835372s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (10.08s)

                                                
                                    
x
+
TestPause/serial/Pause (9.13s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-windows-amd64.exe pause -p pause-061700 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-windows-amd64.exe pause -p pause-061700 --alsologtostderr -v=5: (9.1274392s)
--- PASS: TestPause/serial/Pause (9.13s)

                                                
                                    
x
+
TestPause/serial/VerifyStatus (13.19s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-windows-amd64.exe status -p pause-061700 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-windows-amd64.exe status -p pause-061700 --output=json --layout=cluster: exit status 2 (13.1891262s)

                                                
                                                
-- stdout --
	{"Name":"pause-061700","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.35.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-061700","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (13.19s)
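
The `--layout=cluster` JSON printed above is straightforward to consume programmatically; the command exits with status 2 for a paused cluster, so the payload, not the exit code, carries the state. A Go sketch that unmarshals just the fields shown in this run:

	package main
	
	import (
		"encoding/json"
		"fmt"
		"os/exec"
	)
	
	// clusterState mirrors a subset of the --layout=cluster JSON shown above.
	type clusterState struct {
		Name       string `json:"Name"`
		StatusCode int    `json:"StatusCode"`
		StatusName string `json:"StatusName"`
		Nodes      []struct {
			Name       string `json:"Name"`
			StatusName string `json:"StatusName"`
		} `json:"Nodes"`
	}
	
	func main() {
		// Ignore the exit error (status 2 in this run for a paused cluster) and parse stdout.
		out, _ := exec.Command("out/minikube-windows-amd64.exe",
			"status", "-p", "pause-061700", "--output=json", "--layout=cluster").Output()
	
		var st clusterState
		if err := json.Unmarshal(out, &st); err != nil {
			panic(err)
		}
		fmt.Printf("cluster %s: %s (%d)\n", st.Name, st.StatusName, st.StatusCode)
		for _, n := range st.Nodes {
			fmt.Printf("  node %s: %s\n", n.Name, n.StatusName)
		}
	}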

                                                
                                    

Test skip (33/209)

x
+
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.32.2/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.32.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.32.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.32.2/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/serial/GCPAuth/RealCredentials (0s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/RealCredentials
addons_test.go:698: This test requires a GCE instance (excluding Cloud Shell) with a container based driver
--- SKIP: TestAddons/serial/GCPAuth/RealCredentials (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:422: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestAddons/parallel/AmdGpuDevicePlugin (0s)

                                                
                                                
=== RUN   TestAddons/parallel/AmdGpuDevicePlugin
=== PAUSE TestAddons/parallel/AmdGpuDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/AmdGpuDevicePlugin
addons_test.go:972: skip amd gpu test on all but docker driver and amd64 platform
--- SKIP: TestAddons/parallel/AmdGpuDevicePlugin (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false windows amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/DashboardCmd (300.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:922: (dbg) daemon: [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-168700 --alsologtostderr -v=1]
functional_test.go:933: output didn't produce a URL
functional_test.go:927: (dbg) stopping [out/minikube-windows-amd64.exe dashboard --url --port 36195 -p functional-168700 --alsologtostderr -v=1] ...
helpers_test.go:502: unable to terminate pid 9020: Access is denied.
--- SKIP: TestFunctional/parallel/DashboardCmd (300.01s)

                                                
                                    
x
+
TestFunctional/parallel/DryRun (5.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:991: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-168700 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:991: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-168700 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0138524s)

                                                
                                                
-- stdout --
	* [functional-168700] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 12:50:29.531867    7080 out.go:345] Setting OutFile to fd 872 ...
	I0407 12:50:29.612984    7080 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:50:29.612984    7080 out.go:358] Setting ErrFile to fd 1560...
	I0407 12:50:29.612984    7080 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:50:29.630993    7080 out.go:352] Setting JSON to false
	I0407 12:50:29.634354    7080 start.go:129] hostinfo: {"hostname":"minikube3","uptime":2022,"bootTime":1744028207,"procs":184,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0407 12:50:29.634491    7080 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0407 12:50:29.640546    7080 out.go:177] * [functional-168700] minikube v1.35.0 on Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0407 12:50:29.643437    7080 notify.go:220] Checking for updates...
	I0407 12:50:29.646148    7080 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 12:50:29.649013    7080 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 12:50:29.651069    7080 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0407 12:50:29.654082    7080 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 12:50:29.658076    7080 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:50:29.661077    7080 config.go:182] Loaded profile config "functional-168700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 12:50:29.662067    7080 driver.go:394] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:997: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/DryRun (5.01s)

                                                
                                    
x
+
TestFunctional/parallel/InternationalLanguage (5.04s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1037: (dbg) Run:  out/minikube-windows-amd64.exe start -p functional-168700 --dry-run --memory 250MB --alsologtostderr --driver=hyperv
functional_test.go:1037: (dbg) Non-zero exit: out/minikube-windows-amd64.exe start -p functional-168700 --dry-run --memory 250MB --alsologtostderr --driver=hyperv: exit status 1 (5.0430759s)

                                                
                                                
-- stdout --
	* [functional-168700] minikube v1.35.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	  - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	  - MINIKUBE_FORCE_SYSTEMD=
	  - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	  - MINIKUBE_LOCATION=20598
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true

                                                
                                                
-- /stdout --
** stderr ** 
	I0407 12:50:34.576181    8780 out.go:345] Setting OutFile to fd 1792 ...
	I0407 12:50:34.679937    8780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:50:34.679937    8780 out.go:358] Setting ErrFile to fd 1796...
	I0407 12:50:34.679937    8780 out.go:392] TERM=,COLORTERM=, which probably does not support color
	I0407 12:50:34.704941    8780 out.go:352] Setting JSON to false
	I0407 12:50:34.709926    8780 start.go:129] hostinfo: {"hostname":"minikube3","uptime":2027,"bootTime":1744028207,"procs":186,"os":"windows","platform":"Microsoft Windows 10 Enterprise N","platformFamily":"Standalone Workstation","platformVersion":"10.0.19045.5679 Build 19045.5679","kernelVersion":"10.0.19045.5679 Build 19045.5679","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"a0f355d5-8b6e-4346-9071-73232725d096"}
	W0407 12:50:34.709926    8780 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0407 12:50:34.713929    8780 out.go:177] * [functional-168700] minikube v1.35.0 sur Microsoft Windows 10 Enterprise N 10.0.19045.5679 Build 19045.5679
	I0407 12:50:34.716934    8780 notify.go:220] Checking for updates...
	I0407 12:50:34.718937    8780 out.go:177]   - KUBECONFIG=C:\Users\jenkins.minikube3\minikube-integration\kubeconfig
	I0407 12:50:34.725931    8780 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0407 12:50:34.728945    8780 out.go:177]   - MINIKUBE_HOME=C:\Users\jenkins.minikube3\minikube-integration\.minikube
	I0407 12:50:34.731933    8780 out.go:177]   - MINIKUBE_LOCATION=20598
	I0407 12:50:34.734930    8780 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0407 12:50:34.738932    8780 config.go:182] Loaded profile config "functional-168700": Driver=hyperv, ContainerRuntime=docker, KubernetesVersion=v1.32.2
	I0407 12:50:34.740930    8780 driver.go:394] Setting default libvirt URI to qemu:///system

                                                
                                                
** /stderr **
functional_test.go:1042: skipping this error on HyperV till this issue is solved https://github.com/kubernetes/minikube/issues/9785
--- SKIP: TestFunctional/parallel/InternationalLanguage (5.04s)

                                                
                                    
x
+
TestFunctional/parallel/MountCmd (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd
=== PAUSE TestFunctional/parallel/MountCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MountCmd
functional_test_mount_test.go:57: skipping: mount broken on hyperv: https://github.com/kubernetes/minikube/issues/5029
--- SKIP: TestFunctional/parallel/MountCmd (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:567: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:230: The test WaitService/IngressIP is broken on hyperv https://github.com/kubernetes/minikube/issues/8381
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:258: skipping: access direct test is broken on windows: https://github.com/kubernetes/minikube/issues/8304
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:99: DNS forwarding is only supported for Hyperkit on Darwin, skipping test DNS forwarding
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.00s)

                                                
                                    
x
+
TestFunctionalNewestKubernetes (0s)

=== RUN   TestFunctionalNewestKubernetes
functional_test.go:84: 
--- SKIP: TestFunctionalNewestKubernetes (0.00s)

TestGvisorAddon (0s)

=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

TestKicCustomNetwork (0s)

=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

TestKicExistingNetwork (0s)

=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

TestKicCustomSubnet (0s)

=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

TestKicStaticIP (0s)

=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

TestScheduledStopUnix (0s)

=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:76: test only runs on unix
--- SKIP: TestScheduledStopUnix (0.00s)

TestSkaffold (0s)

=== RUN   TestSkaffold
skaffold_test.go:39: skipping due to https://github.com/kubernetes/minikube/issues/14232
--- SKIP: TestSkaffold (0.00s)

TestInsufficientStorage (0s)

=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

TestMissingContainerUpgrade (0s)

=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)
