Test Report: Hyperkit_macOS 18774

9d63d58ff18723161685b0b8e892cfd1b7c2a23e:2024-04-29:34260

Test failures (10/223)

TestCertOptions (81.64s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-452000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
cert_options_test.go:49: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-options-452000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : exit status 90 (1m15.903171932s)

-- stdout --
	* [cert-options-452000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18774-15855/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18774-15855/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "cert-options-452000" primary control-plane node in "cert-options-452000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2048MB, Disk=20000MB) ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 20:24:50 cert-options-452000 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 20:24:50 cert-options-452000 dockerd[517]: time="2024-04-29T20:24:50.487097550Z" level=info msg="Starting up"
	Apr 29 20:24:50 cert-options-452000 dockerd[517]: time="2024-04-29T20:24:50.487607202Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 20:24:50 cert-options-452000 dockerd[517]: time="2024-04-29T20:24:50.488271963Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=523
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.502921923Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.518209096Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.518279622Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.518345808Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.518380253Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.518459388Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.518496528Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.518643759Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.518683148Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.518717214Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.518745488Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.518825626Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.518991915Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.521133001Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.521185215Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.521316547Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.521362904Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.521464250Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.521534429Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.521568799Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.524118290Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.524207747Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.524253044Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.524285394Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.524315992Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.524409341Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.524642935Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.524745353Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.524781978Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.524814061Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.524845192Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.524874396Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.524902843Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.524932395Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.524972069Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.525007166Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.525036724Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.525067133Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.525112859Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.525152694Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.525214509Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.525251450Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.525281506Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.525312004Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.525340705Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.525370090Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.525399269Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.525430377Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.525459185Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.525487742Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.525520329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.525552241Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.525585991Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.525616433Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.525645194Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.525714637Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.525756332Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.525785654Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.525813152Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.525900760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.525936091Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.525967230Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.526137112Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.526196648Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.526279937Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 20:24:50 cert-options-452000 dockerd[523]: time="2024-04-29T20:24:50.526339455Z" level=info msg="containerd successfully booted in 0.023892s"
	Apr 29 20:24:51 cert-options-452000 dockerd[517]: time="2024-04-29T20:24:51.505676781Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 20:24:51 cert-options-452000 dockerd[517]: time="2024-04-29T20:24:51.515385401Z" level=info msg="Loading containers: start."
	Apr 29 20:24:51 cert-options-452000 dockerd[517]: time="2024-04-29T20:24:51.630389140Z" level=info msg="Loading containers: done."
	Apr 29 20:24:51 cert-options-452000 dockerd[517]: time="2024-04-29T20:24:51.641839165Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 20:24:51 cert-options-452000 dockerd[517]: time="2024-04-29T20:24:51.641997939Z" level=info msg="Daemon has completed initialization"
	Apr 29 20:24:51 cert-options-452000 dockerd[517]: time="2024-04-29T20:24:51.674545126Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 20:24:51 cert-options-452000 dockerd[517]: time="2024-04-29T20:24:51.674747956Z" level=info msg="API listen on [::]:2376"
	Apr 29 20:24:51 cert-options-452000 systemd[1]: Started Docker Application Container Engine.
	Apr 29 20:24:52 cert-options-452000 dockerd[517]: time="2024-04-29T20:24:52.675400554Z" level=info msg="Processing signal 'terminated'"
	Apr 29 20:24:52 cert-options-452000 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 20:24:52 cert-options-452000 dockerd[517]: time="2024-04-29T20:24:52.676722426Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 20:24:52 cert-options-452000 dockerd[517]: time="2024-04-29T20:24:52.676885992Z" level=info msg="Daemon shutdown complete"
	Apr 29 20:24:52 cert-options-452000 dockerd[517]: time="2024-04-29T20:24:52.676973758Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 20:24:52 cert-options-452000 dockerd[517]: time="2024-04-29T20:24:52.677150842Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 20:24:53 cert-options-452000 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 20:24:53 cert-options-452000 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 20:24:53 cert-options-452000 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 20:24:53 cert-options-452000 dockerd[867]: time="2024-04-29T20:24:53.728556917Z" level=info msg="Starting up"
	Apr 29 20:25:53 cert-options-452000 dockerd[867]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 20:25:53 cert-options-452000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 20:25:53 cert-options-452000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 20:25:53 cert-options-452000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

** /stderr **
cert_options_test.go:51: failed to start minikube with args: "out/minikube-darwin-amd64 start -p cert-options-452000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit " : exit status 90
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-452000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:60: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p cert-options-452000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt": exit status 1 (130.622447ms)

-- stdout --
	Can't open /var/lib/minikube/certs/apiserver.crt for reading, No such file or directory
	140629486207040:error:02001002:system library:fopen:No such file or directory:crypto/bio/bss_file.c:69:fopen('/var/lib/minikube/certs/apiserver.crt','r')
	140629486207040:error:2006D080:BIO routines:BIO_new_file:no such file:crypto/bio/bss_file.c:76:
	unable to load certificate

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
cert_options_test.go:62: failed to read apiserver cert inside minikube. args "out/minikube-darwin-amd64 -p cert-options-452000 ssh \"openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt\"": exit status 1
cert_options_test.go:69: apiserver cert does not include 127.0.0.1 in SAN.
cert_options_test.go:69: apiserver cert does not include 192.168.15.15 in SAN.
cert_options_test.go:69: apiserver cert does not include localhost in SAN.
cert_options_test.go:69: apiserver cert does not include www.google.com in SAN.
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-452000 config view
cert_options_test.go:93: Kubeconfig apiserver server port incorrect. Output of 
'kubectl config view' = "\n-- stdout --\n\tapiVersion: v1\n\tclusters:\n\t- cluster:\n\t    certificate-authority: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.crt\n\t    extensions:\n\t    - extension:\n\t        last-update: Mon, 29 Apr 2024 13:24:38 PDT\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.33.0\n\t      name: cluster_info\n\t    server: https://192.169.0.73:8443\n\t  name: cert-expiration-660000\n\tcontexts:\n\t- context:\n\t    cluster: cert-expiration-660000\n\t    extensions:\n\t    - extension:\n\t        last-update: Mon, 29 Apr 2024 13:24:38 PDT\n\t        provider: minikube.sigs.k8s.io\n\t        version: v1.33.0\n\t      name: context_info\n\t    namespace: default\n\t    user: cert-expiration-660000\n\t  name: cert-expiration-660000\n\tcurrent-context: cert-expiration-660000\n\tkind: Config\n\tpreferences: {}\n\tusers:\n\t- name: cert-expiration-660000\n\t  user:\n\t    client-certificate: /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/cert-expiration-660000/client.crt\n\t    client-key: /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/cert-expiration-660000/client.key\n\n-- /stdout --"
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-452000 -- "sudo cat /etc/kubernetes/admin.conf"
cert_options_test.go:100: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p cert-options-452000 -- "sudo cat /etc/kubernetes/admin.conf": exit status 1 (131.71412ms)

-- stdout --
	cat: /etc/kubernetes/admin.conf: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
cert_options_test.go:102: failed to SSH to minikube with args: "out/minikube-darwin-amd64 ssh -p cert-options-452000 -- \"sudo cat /etc/kubernetes/admin.conf\"" : exit status 1
cert_options_test.go:106: Internal minikube kubeconfig (admin.conf) does not contain the right api port. 
-- stdout --
	cat: /etc/kubernetes/admin.conf: No such file or directory

-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

** /stderr **
cert_options_test.go:109: *** TestCertOptions FAILED at 2024-04-29 13:25:54.020671 -0700 PDT m=+4506.320835703
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-452000 -n cert-options-452000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-options-452000 -n cert-options-452000: exit status 6 (155.913708ms)

-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

-- /stdout --
** stderr ** 
	E0429 13:25:54.163654   20725 status.go:417] kubeconfig endpoint: get endpoint: "cert-options-452000" does not appear in /Users/jenkins/minikube-integration/18774-15855/kubeconfig

** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "cert-options-452000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:175: Cleaning up "cert-options-452000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-452000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-452000: (5.284412132s)
--- FAIL: TestCertOptions (81.64s)
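The root cause in the log above is the RUNTIME_ENABLE exit: after minikube restarts docker inside the VM, dockerd cannot reach containerd ("failed to dial \"/run/containerd/containerd.sock\": context deadline exceeded"), so the cluster never provisions and the later assertions (SAN checks, apiserver.crt, admin.conf) fail as a consequence. A minimal local triage sketch, reusing the commands quoted in this log; the socket check (ls) is an added suggestion, not something the report ran:

    # Re-run the failing start with the same flags as the test
    out/minikube-darwin-amd64 start -p cert-options-452000 --memory=2048 \
        --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
        --apiserver-names=localhost --apiserver-names=www.google.com \
        --apiserver-port=8555 --driver=hyperkit

    # Inspect the docker unit and its journal inside the VM (the diagnostics the error message points at)
    out/minikube-darwin-amd64 ssh -p cert-options-452000 -- "sudo systemctl status docker"
    out/minikube-darwin-amd64 ssh -p cert-options-452000 -- "sudo journalctl --no-pager -u docker"

    # Check whether the containerd socket dockerd failed to dial exists at all
    out/minikube-darwin-amd64 ssh -p cert-options-452000 -- "ls -l /run/containerd/containerd.sock"

    # Collect full logs for a GitHub issue, then clean up the profile
    out/minikube-darwin-amd64 logs -p cert-options-452000 --file=logs.txt
    out/minikube-darwin-amd64 delete -p cert-options-452000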

TestMultiControlPlane/serial/StartCluster (131.68s)

=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-431000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
ha_test.go:101: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-431000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : exit status 90 (2m8.899768075s)

-- stdout --
	* [ha-431000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18774-15855/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18774-15855/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "ha-431000" primary control-plane node in "ha-431000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Configuring RBAC rules ...
	* Configuring CNI (Container Networking Interface) ...
	  - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	* Enabled addons: storage-provisioner, default-storageclass
	
	* Starting "ha-431000-m02" control-plane node in "ha-431000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Found network options:
	  - NO_PROXY=192.169.0.53
	
	

-- /stdout --
** stderr ** 
	I0429 12:25:07.932524   17722 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:25:07.932931   17722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:25:07.932939   17722 out.go:304] Setting ErrFile to fd 2...
	I0429 12:25:07.932945   17722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:25:07.933219   17722 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18774-15855/.minikube/bin
	I0429 12:25:07.935193   17722 out.go:298] Setting JSON to false
	I0429 12:25:07.966866   17722 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":10477,"bootTime":1714408230,"procs":429,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0429 12:25:07.967005   17722 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 12:25:07.990175   17722 out.go:177] * [ha-431000] minikube v1.33.0 on Darwin 14.4.1
	I0429 12:25:08.073413   17722 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 12:25:08.048486   17722 notify.go:220] Checking for updates...
	I0429 12:25:08.160225   17722 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18774-15855/kubeconfig
	I0429 12:25:08.223714   17722 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 12:25:08.245577   17722 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:25:08.266426   17722 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18774-15855/.minikube
	I0429 12:25:08.296555   17722 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 12:25:08.318007   17722 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:25:08.348195   17722 out.go:177] * Using the hyperkit driver based on user configuration
	I0429 12:25:08.391518   17722 start.go:297] selected driver: hyperkit
	I0429 12:25:08.391552   17722 start.go:901] validating driver "hyperkit" against <nil>
	I0429 12:25:08.391574   17722 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 12:25:08.395979   17722 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:25:08.396124   17722 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18774-15855/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0429 12:25:08.404644   17722 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
	I0429 12:25:08.408717   17722 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:08.408755   17722 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0429 12:25:08.408783   17722 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 12:25:08.409041   17722 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 12:25:08.409091   17722 cni.go:84] Creating CNI manager for ""
	I0429 12:25:08.409101   17722 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0429 12:25:08.409113   17722 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 12:25:08.409179   17722 start.go:340] cluster config:
	{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:25:08.409294   17722 iso.go:125] acquiring lock: {Name:mk81eeca500d1c63233c315450a8b707db3bb810 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:25:08.451413   17722 out.go:177] * Starting "ha-431000" primary control-plane node in "ha-431000" cluster
	I0429 12:25:08.472458   17722 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 12:25:08.472552   17722 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 12:25:08.472575   17722 cache.go:56] Caching tarball of preloaded images
	I0429 12:25:08.472808   17722 preload.go:173] Found /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 12:25:08.472832   17722 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 12:25:08.473307   17722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json ...
	I0429 12:25:08.473347   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json: {Name:mk35a218377c5d37033d174261b78cb0d686fe80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:08.474156   17722 start.go:360] acquireMachinesLock for ha-431000: {Name:mkd2435de3b02389c894a9c5afc7bb14c11265dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 12:25:08.474283   17722 start.go:364] duration metric: took 104.507µs to acquireMachinesLock for "ha-431000"
	I0429 12:25:08.474332   17722 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 12:25:08.474420   17722 start.go:125] createHost starting for "" (driver="hyperkit")
	I0429 12:25:08.516479   17722 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 12:25:08.516777   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:08.516851   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:08.526733   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61961
	I0429 12:25:08.527153   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:08.527581   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:08.527592   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:08.527830   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:08.527946   17722 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0429 12:25:08.528043   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:08.528148   17722 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0429 12:25:08.528173   17722 client.go:168] LocalClient.Create starting
	I0429 12:25:08.528213   17722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem
	I0429 12:25:08.528263   17722 main.go:141] libmachine: Decoding PEM data...
	I0429 12:25:08.528280   17722 main.go:141] libmachine: Parsing certificate...
	I0429 12:25:08.528335   17722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem
	I0429 12:25:08.528371   17722 main.go:141] libmachine: Decoding PEM data...
	I0429 12:25:08.528388   17722 main.go:141] libmachine: Parsing certificate...
	I0429 12:25:08.528404   17722 main.go:141] libmachine: Running pre-create checks...
	I0429 12:25:08.528414   17722 main.go:141] libmachine: (ha-431000) Calling .PreCreateCheck
	I0429 12:25:08.528488   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:08.528641   17722 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0429 12:25:08.529147   17722 main.go:141] libmachine: Creating machine...
	I0429 12:25:08.529155   17722 main.go:141] libmachine: (ha-431000) Calling .Create
	I0429 12:25:08.529220   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:08.529331   17722 main.go:141] libmachine: (ha-431000) DBG | I0429 12:25:08.529213   17730 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/18774-15855/.minikube
	I0429 12:25:08.529403   17722 main.go:141] libmachine: (ha-431000) Downloading /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18774-15855/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 12:25:08.704997   17722 main.go:141] libmachine: (ha-431000) DBG | I0429 12:25:08.704900   17730 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa...
	I0429 12:25:08.942793   17722 main.go:141] libmachine: (ha-431000) DBG | I0429 12:25:08.942700   17730 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/ha-431000.rawdisk...
	I0429 12:25:08.942814   17722 main.go:141] libmachine: (ha-431000) DBG | Writing magic tar header
	I0429 12:25:08.942822   17722 main.go:141] libmachine: (ha-431000) DBG | Writing SSH key tar header
	I0429 12:25:08.943194   17722 main.go:141] libmachine: (ha-431000) DBG | I0429 12:25:08.943155   17730 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000 ...
	I0429 12:25:09.300525   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:09.300543   17722 main.go:141] libmachine: (ha-431000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/hyperkit.pid
	I0429 12:25:09.300607   17722 main.go:141] libmachine: (ha-431000) DBG | Using UUID 24bdec35-56f8-46dd-965c-91e20eb4b3f3
	I0429 12:25:09.409075   17722 main.go:141] libmachine: (ha-431000) DBG | Generated MAC 12:79:5a:6e:b2:5a
	I0429 12:25:09.409094   17722 main.go:141] libmachine: (ha-431000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0429 12:25:09.409125   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"24bdec35-56f8-46dd-965c-91e20eb4b3f3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000198630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0429 12:25:09.409150   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"24bdec35-56f8-46dd-965c-91e20eb4b3f3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000198630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0429 12:25:09.409187   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "24bdec35-56f8-46dd-965c-91e20eb4b3f3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/ha-431000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0429 12:25:09.409213   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 24bdec35-56f8-46dd-965c-91e20eb4b3f3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/ha-431000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/console-ring -f kexec,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0429 12:25:09.409223   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0429 12:25:09.412103   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: Pid is 17733
	I0429 12:25:09.412560   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 0
	I0429 12:25:09.412590   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:09.412663   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:09.413636   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:09.413744   17722 main.go:141] libmachine: (ha-431000) DBG | Found 51 entries in /var/db/dhcpd_leases!
	I0429 12:25:09.413764   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:09.413783   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:09.413800   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:09.413811   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:09.413824   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:09.413850   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:09.413862   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:09.413875   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:09.413883   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:09.413914   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:09.413927   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:09.413957   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:09.413970   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:09.413986   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:09.413992   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:09.414001   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:09.414010   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:09.414024   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:09.414041   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:09.414056   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:09.414068   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:09.414082   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:09.414098   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:09.414114   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:09.414144   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:09.414157   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:09.414169   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:09.414178   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:09.414187   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:09.414196   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:09.414206   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:09.414215   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:09.414231   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:09.414248   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:09.414272   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:09.414288   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:09.414300   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:09.414315   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:09.414327   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:09.414339   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:09.414350   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:09.414363   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:09.414382   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:09.414397   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:09.414414   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:09.414428   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:09.414441   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:09.414455   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:09.414467   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:09.414490   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:09.414509   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:09.419450   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0429 12:25:09.470495   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0429 12:25:09.471088   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0429 12:25:09.471110   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0429 12:25:09.471118   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0429 12:25:09.471128   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0429 12:25:09.850480   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0429 12:25:09.850496   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0429 12:25:09.966742   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0429 12:25:09.966763   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0429 12:25:09.966770   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0429 12:25:09.966776   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0429 12:25:09.967629   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0429 12:25:09.967639   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0429 12:25:11.414538   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 1
	I0429 12:25:11.414553   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:11.414604   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:11.415399   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:11.415486   17722 main.go:141] libmachine: (ha-431000) DBG | Found 51 entries in /var/db/dhcpd_leases!
	I0429 12:25:11.415495   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:11.415504   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:11.415511   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:11.415518   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:11.415539   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:11.415546   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:11.415584   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:11.415593   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:11.415606   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:11.415614   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:11.415624   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:11.415633   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:11.415641   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:11.415647   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:11.415654   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:11.415660   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:11.415667   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:11.415676   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:11.415682   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:11.415689   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:11.415695   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:11.415702   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:11.415709   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:11.415714   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:11.415722   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:11.415729   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:11.415749   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:11.415755   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:11.415763   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:11.415771   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:11.415782   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:11.415790   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:11.415798   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:11.415804   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:11.415809   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:11.415816   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:11.415822   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:11.415828   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:11.415836   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:11.415844   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:11.415850   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:11.415858   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:11.415864   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:11.415872   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:11.415879   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:11.415886   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:11.415893   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:11.415898   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:11.415910   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:11.415924   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:11.415936   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:13.416614   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 2
	I0429 12:25:13.416628   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:13.416689   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:13.417651   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:13.417739   17722 main.go:141] libmachine: (ha-431000) DBG | Found 51 entries in /var/db/dhcpd_leases!
	I0429 12:25:13.417747   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:13.417765   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:13.417774   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:13.417791   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:13.417803   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:13.417810   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:13.417818   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:13.417823   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:13.417830   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:13.417836   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:13.417843   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:13.417849   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:13.417861   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:13.417870   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:13.417883   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:13.417895   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:13.417902   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:13.417909   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:13.417922   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:13.417935   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:13.417949   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:13.417955   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:13.417969   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:13.417980   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:13.417992   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:13.418008   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:13.418016   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:13.418023   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:13.418031   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:13.418038   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:13.418044   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:13.418052   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:13.418059   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:13.418065   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:13.418083   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:13.418093   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:13.418101   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:13.418108   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:13.418116   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:13.418123   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:13.418130   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:13.418137   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:13.418144   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:13.418151   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:13.418163   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:13.418181   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:13.418190   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:13.418198   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:13.418205   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:13.418212   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:13.418220   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:15.211585   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:15 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0429 12:25:15.211693   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:15 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0429 12:25:15.211704   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:15 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0429 12:25:15.236755   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:15 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0429 12:25:15.420156   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 3
	I0429 12:25:15.420179   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:15.420320   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:15.421755   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:15.421973   17722 main.go:141] libmachine: (ha-431000) DBG | Found 51 entries in /var/db/dhcpd_leases!
	I0429 12:25:15.421993   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:15.422007   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:15.422103   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:15.422117   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:15.422133   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:15.422156   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:15.422173   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:15.422185   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:15.422194   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:15.422211   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:15.422225   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:15.422235   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:15.422244   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:15.422263   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:15.422280   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:15.422297   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:15.422313   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:15.422336   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:15.422353   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:15.422369   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:15.422380   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:15.422393   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:15.422404   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:15.422412   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:15.422424   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:15.422432   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:15.422442   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:15.422456   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:15.422472   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:15.422492   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:15.422511   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:15.422535   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:15.422552   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:15.422563   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:15.422571   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:15.422581   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:15.422599   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:15.422609   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:15.422620   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:15.422628   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:15.422639   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:15.422670   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:15.422682   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:15.422700   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:15.422711   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:15.422721   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:15.422732   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:15.422754   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:15.422771   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:15.422797   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:15.422818   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:17.422424   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 4
	I0429 12:25:17.422440   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:17.422505   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:17.423336   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:17.423416   17722 main.go:141] libmachine: (ha-431000) DBG | Found 51 entries in /var/db/dhcpd_leases!
	I0429 12:25:17.423428   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:17.423437   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:17.423446   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:17.423453   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:17.423470   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:17.423477   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:17.423484   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:17.423492   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:17.423498   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:17.423504   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:17.423514   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:17.423519   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:17.423532   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:17.423542   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:17.423561   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:17.423570   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:17.423577   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:17.423593   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:17.423600   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:17.423608   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:17.423615   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:17.423621   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:17.423627   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:17.423634   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:17.423639   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:17.423647   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:17.423653   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:17.423666   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:17.423674   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:17.423681   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:17.423696   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:17.423704   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:17.423711   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:17.423718   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:17.423726   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:17.423733   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:17.423741   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:17.423750   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:17.423758   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:17.423764   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:17.423770   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:17.423780   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:17.423792   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:17.423800   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:17.423808   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:17.423815   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:17.423821   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:17.423838   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:17.423848   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:17.423855   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:17.423864   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:19.424012   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 5
	I0429 12:25:19.424040   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:19.424118   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:19.424931   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:19.425019   17722 main.go:141] libmachine: (ha-431000) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:25:19.425031   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:25:19.425039   17722 main.go:141] libmachine: (ha-431000) DBG | Found match: 12:79:5a:6e:b2:5a
	I0429 12:25:19.425043   17722 main.go:141] libmachine: (ha-431000) DBG | IP: 192.169.0.53
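	The attempts above show how the hyperkit driver resolves the new VM's address: it polls macOS's /var/db/dhcpd_leases until an entry matching the machine's MAC (12:79:5a:6e:b2:5a) appears, here resolving to 192.169.0.53 on attempt 5. Purely as an illustration (not part of the test run), a manual spot-check of that same lease file could look like:
	# Look for the VM's MAC in the vmnet DHCP lease file the driver polls above;
	# -B 3 prints the surrounding entry lines so the assigned IP is visible.
	sudo grep -i -B 3 '12:79:5a:6e:b2:5a' /var/db/dhcpd_leases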
	I0429 12:25:19.425100   17722 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0429 12:25:19.425750   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:19.425854   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:19.425968   17722 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 12:25:19.425977   17722 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0429 12:25:19.426076   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:19.426132   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:19.426957   17722 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 12:25:19.426968   17722 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 12:25:19.426974   17722 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 12:25:19.426979   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:19.427082   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:19.427181   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:19.427279   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:19.427395   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:19.427528   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:19.427739   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:19.427747   17722 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 12:25:20.480764   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
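	The `exit 0` run above is the driver's SSH-availability probe ("Waiting for SSH to be available..."): a no-op command executed over SSH until the guest's sshd answers. A rough shell equivalent, a sketch only, assuming the key path and docker user that appear later in this log, would be:
	# Retry a trivial command until sshd in the new VM accepts the connection (bash).
	until ssh -o StrictHostKeyChecking=no -o ConnectTimeout=5 \
	    -i /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa \
	    docker@192.169.0.53 'exit 0'; do
	  sleep 2
	done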
	I0429 12:25:20.480777   17722 main.go:141] libmachine: Detecting the provisioner...
	I0429 12:25:20.480782   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:20.480910   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:20.481005   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.481084   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.481169   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:20.481302   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:20.481440   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:20.481448   17722 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 12:25:20.531217   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 12:25:20.531271   17722 main.go:141] libmachine: found compatible host: buildroot
	I0429 12:25:20.531277   17722 main.go:141] libmachine: Provisioning with buildroot...
	I0429 12:25:20.531282   17722 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0429 12:25:20.531410   17722 buildroot.go:166] provisioning hostname "ha-431000"
	I0429 12:25:20.531421   17722 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0429 12:25:20.531516   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:20.531611   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:20.531708   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.531810   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.531912   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:20.532028   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:20.532159   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:20.532169   17722 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000 && echo "ha-431000" | sudo tee /etc/hostname
	I0429 12:25:20.591606   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000
	
	I0429 12:25:20.591626   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:20.591764   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:20.591878   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.591967   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.592068   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:20.592207   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:20.592350   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:20.592361   17722 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 12:25:20.648101   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:25:20.648121   17722 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18774-15855/.minikube CaCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18774-15855/.minikube}
	I0429 12:25:20.648135   17722 buildroot.go:174] setting up certificates
	I0429 12:25:20.648144   17722 provision.go:84] configureAuth start
	I0429 12:25:20.648150   17722 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0429 12:25:20.648287   17722 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0429 12:25:20.648383   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:20.648467   17722 provision.go:143] copyHostCerts
	I0429 12:25:20.648496   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem
	I0429 12:25:20.648561   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem, removing ...
	I0429 12:25:20.648570   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem
	I0429 12:25:20.648715   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem (1082 bytes)
	I0429 12:25:20.648916   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem
	I0429 12:25:20.648956   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem, removing ...
	I0429 12:25:20.648961   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem
	I0429 12:25:20.649049   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem (1123 bytes)
	I0429 12:25:20.649197   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem
	I0429 12:25:20.649236   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem, removing ...
	I0429 12:25:20.649241   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem
	I0429 12:25:20.649330   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem (1675 bytes)
	I0429 12:25:20.649473   17722 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca-key.pem org=jenkins.ha-431000 san=[127.0.0.1 192.169.0.53 ha-431000 localhost minikube]
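	The server certificate generated above is signed by the local minikube CA with the SAN set [127.0.0.1 192.169.0.53 ha-431000 localhost minikube] and org jenkins.ha-431000. minikube performs this step in Go; shown only as an illustrative openssl sketch of the same SAN set (bash, hypothetical file names):
	# Illustrative only: issue a server cert carrying the SANs from the log line above.
	openssl req -new -newkey rsa:2048 -nodes -subj "/O=jenkins.ha-431000" \
	    -keyout server-key.pem -out server.csr
	openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
	    -days 365 -out server.pem \
	    -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.169.0.53,DNS:ha-431000,DNS:localhost,DNS:minikube')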
	I0429 12:25:20.983440   17722 provision.go:177] copyRemoteCerts
	I0429 12:25:20.983549   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 12:25:20.983566   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:20.983728   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:20.983833   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.983973   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:20.984120   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:21.014934   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 12:25:21.015004   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 12:25:21.034390   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 12:25:21.034453   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0429 12:25:21.055272   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 12:25:21.055342   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 12:25:21.074968   17722 provision.go:87] duration metric: took 426.81697ms to configureAuth
	I0429 12:25:21.074981   17722 buildroot.go:189] setting minikube options for container-runtime
	I0429 12:25:21.075129   17722 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:25:21.075141   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:21.075289   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:21.075381   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:21.075460   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.075546   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.075623   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:21.075741   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:21.075864   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:21.075872   17722 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 12:25:21.124122   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 12:25:21.124135   17722 buildroot.go:70] root file system type: tmpfs
	I0429 12:25:21.124211   17722 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 12:25:21.124224   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:21.124366   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:21.124464   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.124564   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.124654   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:21.124800   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:21.124944   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:21.124987   17722 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 12:25:21.192987   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 12:25:21.193010   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:21.193155   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:21.193259   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.193352   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.193439   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:21.193576   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:21.193725   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:21.193737   17722 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 12:25:22.738902   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 12:25:22.738918   17722 main.go:141] libmachine: Checking connection to Docker...
	I0429 12:25:22.738926   17722 main.go:141] libmachine: (ha-431000) Calling .GetURL
	I0429 12:25:22.739069   17722 main.go:141] libmachine: Docker is up and running!
	I0429 12:25:22.739077   17722 main.go:141] libmachine: Reticulating splines...
	I0429 12:25:22.739081   17722 client.go:171] duration metric: took 14.211068638s to LocalClient.Create
	I0429 12:25:22.739093   17722 start.go:167] duration metric: took 14.211111126s to libmachine.API.Create "ha-431000"
	I0429 12:25:22.739099   17722 start.go:293] postStartSetup for "ha-431000" (driver="hyperkit")
	I0429 12:25:22.739106   17722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 12:25:22.739115   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:22.739266   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 12:25:22.739278   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:22.739375   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:22.739470   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:22.739572   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:22.739667   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:22.770970   17722 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 12:25:22.774135   17722 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 12:25:22.774148   17722 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18774-15855/.minikube/addons for local assets ...
	I0429 12:25:22.774248   17722 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18774-15855/.minikube/files for local assets ...
	I0429 12:25:22.774464   17722 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem -> 163012.pem in /etc/ssl/certs
	I0429 12:25:22.774471   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem -> /etc/ssl/certs/163012.pem
	I0429 12:25:22.774683   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 12:25:22.781634   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem --> /etc/ssl/certs/163012.pem (1708 bytes)
	I0429 12:25:22.801443   17722 start.go:296] duration metric: took 62.337848ms for postStartSetup
	I0429 12:25:22.801469   17722 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0429 12:25:22.802046   17722 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0429 12:25:22.802212   17722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json ...
	I0429 12:25:22.802533   17722 start.go:128] duration metric: took 14.328253816s to createHost
	I0429 12:25:22.802553   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:22.802653   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:22.802747   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:22.802827   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:22.802911   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:22.803023   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:22.803149   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:22.803156   17722 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 12:25:22.853273   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714418723.058219257
	
	I0429 12:25:22.853285   17722 fix.go:216] guest clock: 1714418723.058219257
	I0429 12:25:22.853294   17722 fix.go:229] Guest: 2024-04-29 12:25:23.058219257 -0700 PDT Remote: 2024-04-29 12:25:22.80254 -0700 PDT m=+14.919884293 (delta=255.679257ms)
	I0429 12:25:22.853306   17722 fix.go:200] guest clock delta is within tolerance: 255.679257ms
	I0429 12:25:22.853309   17722 start.go:83] releasing machines lock for "ha-431000", held for 14.379182495s
	I0429 12:25:22.853329   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:22.853469   17722 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0429 12:25:22.853555   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:22.853914   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:22.854029   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:22.854103   17722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 12:25:22.854130   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:22.854195   17722 ssh_runner.go:195] Run: cat /version.json
	I0429 12:25:22.854205   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:22.854229   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:22.854325   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:22.854339   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:22.854430   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:22.854457   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:22.854550   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:22.854559   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:22.854669   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:22.880227   17722 ssh_runner.go:195] Run: systemctl --version
	I0429 12:25:22.884904   17722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 12:25:23.083357   17722 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 12:25:23.083479   17722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 12:25:23.099007   17722 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 12:25:23.099018   17722 start.go:494] detecting cgroup driver to use...
	I0429 12:25:23.099124   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:25:23.114251   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 12:25:23.124072   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 12:25:23.133303   17722 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 12:25:23.133353   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 12:25:23.142358   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:25:23.151274   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 12:25:23.160219   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:25:23.169150   17722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 12:25:23.178221   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 12:25:23.187855   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 12:25:23.200187   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 12:25:23.215042   17722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 12:25:23.229984   17722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 12:25:23.239505   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:23.358187   17722 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 12:25:23.377452   17722 start.go:494] detecting cgroup driver to use...
	I0429 12:25:23.377536   17722 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 12:25:23.390115   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:25:23.402150   17722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 12:25:23.418123   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:25:23.430203   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:25:23.441283   17722 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 12:25:23.508378   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:25:23.519755   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:25:23.536893   17722 ssh_runner.go:195] Run: which cri-dockerd
	I0429 12:25:23.539939   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 12:25:23.548090   17722 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 12:25:23.561479   17722 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 12:25:23.659092   17722 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 12:25:23.770465   17722 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 12:25:23.770541   17722 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 12:25:23.785669   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:23.879272   17722 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 12:25:26.127907   17722 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.248642004s)
	I0429 12:25:26.127971   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 12:25:26.139541   17722 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0429 12:25:26.153363   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 12:25:26.163941   17722 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 12:25:26.260524   17722 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 12:25:26.359754   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:26.473092   17722 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 12:25:26.486634   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 12:25:26.497861   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:26.593975   17722 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 12:25:26.650473   17722 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 12:25:26.650551   17722 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 12:25:26.655442   17722 start.go:562] Will wait 60s for crictl version
	I0429 12:25:26.655497   17722 ssh_runner.go:195] Run: which crictl
	I0429 12:25:26.658642   17722 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 12:25:26.687124   17722 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 12:25:26.687194   17722 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 12:25:26.703960   17722 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 12:25:26.743708   17722 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 12:25:26.743769   17722 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0429 12:25:26.744198   17722 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0429 12:25:26.748889   17722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 12:25:26.759848   17722 kubeadm.go:877] updating cluster {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 12:25:26.759910   17722 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 12:25:26.759963   17722 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 12:25:26.778002   17722 docker.go:685] Got preloaded images: 
	I0429 12:25:26.778015   17722 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0429 12:25:26.778063   17722 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 12:25:26.792529   17722 ssh_runner.go:195] Run: which lz4
	I0429 12:25:26.795405   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0429 12:25:26.795520   17722 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 12:25:26.798617   17722 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 12:25:26.798635   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0429 12:25:27.765888   17722 docker.go:649] duration metric: took 970.420369ms to copy over tarball
	I0429 12:25:27.765960   17722 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 12:25:30.567839   17722 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.801895501s)
	I0429 12:25:30.567854   17722 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 12:25:30.595797   17722 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 12:25:30.604930   17722 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0429 12:25:30.618739   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:30.724313   17722 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 12:25:32.998664   17722 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.274358101s)
	I0429 12:25:32.998761   17722 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 12:25:33.010141   17722 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 12:25:33.010164   17722 cache_images.go:84] Images are preloaded, skipping loading
	I0429 12:25:33.010175   17722 kubeadm.go:928] updating node { 192.169.0.53 8443 v1.30.0 docker true true} ...
	I0429 12:25:33.010258   17722 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 12:25:33.010327   17722 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 12:25:33.028263   17722 cni.go:84] Creating CNI manager for ""
	I0429 12:25:33.028276   17722 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 12:25:33.028288   17722 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 12:25:33.028307   17722 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.53 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-431000 NodeName:ha-431000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 12:25:33.028401   17722 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-431000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 12:25:33.028424   17722 kube-vip.go:115] generating kube-vip config ...
	I0429 12:25:33.028473   17722 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 12:25:33.041730   17722 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0429 12:25:33.041824   17722 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0429 12:25:33.041883   17722 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 12:25:33.049658   17722 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 12:25:33.049723   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0429 12:25:33.057238   17722 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0429 12:25:33.070533   17722 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 12:25:33.084130   17722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0429 12:25:33.098615   17722 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0429 12:25:33.112003   17722 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0429 12:25:33.115112   17722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 12:25:33.124648   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:33.220871   17722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:25:33.235322   17722 certs.go:68] Setting up /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000 for IP: 192.169.0.53
	I0429 12:25:33.235337   17722 certs.go:194] generating shared ca certs ...
	I0429 12:25:33.235350   17722 certs.go:226] acquiring lock for ca certs: {Name:mk18b421bd78dac9f18e716e2a7f75e3dd2eab02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.235540   17722 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.key
	I0429 12:25:33.235613   17722 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/proxy-client-ca.key
	I0429 12:25:33.235624   17722 certs.go:256] generating profile certs ...
	I0429 12:25:33.235671   17722 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.key
	I0429 12:25:33.235685   17722 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.crt with IP's: []
	I0429 12:25:33.339009   17722 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.crt ...
	I0429 12:25:33.339025   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.crt: {Name:mkb1110c9ecfb391fd480dbb43740124cada1f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.339372   17722 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.key ...
	I0429 12:25:33.339380   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.key: {Name:mkd77fb4d6c3d30f48c31a52650895ec059a5da2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.339601   17722 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key.da16b1a9
	I0429 12:25:33.339619   17722 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt.da16b1a9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.53 192.169.0.254]
	I0429 12:25:33.423656   17722 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt.da16b1a9 ...
	I0429 12:25:33.423669   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt.da16b1a9: {Name:mk814e943097b224c38c531359e60f0e011c83a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.454432   17722 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key.da16b1a9 ...
	I0429 12:25:33.454489   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key.da16b1a9: {Name:mkf400b8f0ffe83cfed5eaf2cbd6dc5c6a80d173 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.455578   17722 certs.go:381] copying /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt.da16b1a9 -> /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt
	I0429 12:25:33.455838   17722 certs.go:385] copying /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key.da16b1a9 -> /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key
	I0429 12:25:33.456077   17722 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.key
	I0429 12:25:33.456094   17722 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.crt with IP's: []
	I0429 12:25:33.747139   17722 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.crt ...
	I0429 12:25:33.747160   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.crt: {Name:mk16d91ff8fca22baa28a0d26524b0eab36376d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.747480   17722 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.key ...
	I0429 12:25:33.747491   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.key: {Name:mk54d72d806a4c855d8c1362244d32326e1573f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.747702   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 12:25:33.747734   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 12:25:33.747753   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 12:25:33.747772   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 12:25:33.747790   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 12:25:33.747808   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 12:25:33.747826   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 12:25:33.747844   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 12:25:33.747942   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/16301.pem (1338 bytes)
	W0429 12:25:33.747991   17722 certs.go:480] ignoring /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/16301_empty.pem, impossibly tiny 0 bytes
	I0429 12:25:33.747999   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 12:25:33.748029   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem (1082 bytes)
	I0429 12:25:33.748058   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem (1123 bytes)
	I0429 12:25:33.748086   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem (1675 bytes)
	I0429 12:25:33.748159   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem (1708 bytes)
	I0429 12:25:33.748192   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:25:33.748211   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/16301.pem -> /usr/share/ca-certificates/16301.pem
	I0429 12:25:33.748229   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem -> /usr/share/ca-certificates/163012.pem
	I0429 12:25:33.748622   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 12:25:33.769199   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 12:25:33.789288   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 12:25:33.809631   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 12:25:33.829028   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 12:25:33.848280   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 12:25:33.867787   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 12:25:33.886985   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 12:25:33.908803   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 12:25:33.936146   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/16301.pem --> /usr/share/ca-certificates/16301.pem (1338 bytes)
	I0429 12:25:33.955589   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem --> /usr/share/ca-certificates/163012.pem (1708 bytes)
	I0429 12:25:33.976759   17722 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 12:25:33.990981   17722 ssh_runner.go:195] Run: openssl version
	I0429 12:25:33.995277   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163012.pem && ln -fs /usr/share/ca-certificates/163012.pem /etc/ssl/certs/163012.pem"
	I0429 12:25:34.003661   17722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163012.pem
	I0429 12:25:34.007097   17722 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 19:19 /usr/share/ca-certificates/163012.pem
	I0429 12:25:34.007137   17722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163012.pem
	I0429 12:25:34.011417   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163012.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 12:25:34.019724   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 12:25:34.027921   17722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:25:34.031378   17722 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 19:11 /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:25:34.031414   17722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:25:34.035640   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 12:25:34.044197   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16301.pem && ln -fs /usr/share/ca-certificates/16301.pem /etc/ssl/certs/16301.pem"
	I0429 12:25:34.052743   17722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16301.pem
	I0429 12:25:34.056299   17722 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 19:19 /usr/share/ca-certificates/16301.pem
	I0429 12:25:34.056347   17722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16301.pem
	I0429 12:25:34.060683   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16301.pem /etc/ssl/certs/51391683.0"
	I0429 12:25:34.069054   17722 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 12:25:34.072258   17722 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 12:25:34.072301   17722 kubeadm.go:391] StartCluster: {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:25:34.072393   17722 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 12:25:34.083740   17722 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 12:25:34.091194   17722 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 12:25:34.098727   17722 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 12:25:34.106172   17722 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 12:25:34.106181   17722 kubeadm.go:156] found existing configuration files:
	
	I0429 12:25:34.106220   17722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 12:25:34.113138   17722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 12:25:34.113180   17722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 12:25:34.120473   17722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 12:25:34.127438   17722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 12:25:34.127477   17722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 12:25:34.134756   17722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 12:25:34.141743   17722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 12:25:34.141780   17722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 12:25:34.149046   17722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 12:25:34.156264   17722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 12:25:34.156318   17722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 12:25:34.163835   17722 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 12:25:34.237910   17722 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 12:25:34.237955   17722 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 12:25:34.323023   17722 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 12:25:34.323108   17722 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 12:25:34.323194   17722 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 12:25:34.504766   17722 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 12:25:34.536903   17722 out.go:204]   - Generating certificates and keys ...
	I0429 12:25:34.536959   17722 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 12:25:34.537016   17722 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 12:25:34.879309   17722 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 12:25:34.943072   17722 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 12:25:35.013417   17722 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 12:25:35.182918   17722 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 12:25:35.261483   17722 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 12:25:35.261590   17722 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.53 127.0.0.1 ::1]
	I0429 12:25:35.419895   17722 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 12:25:35.420010   17722 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.53 127.0.0.1 ::1]
	I0429 12:25:35.479554   17722 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 12:25:35.607018   17722 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 12:25:35.903671   17722 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 12:25:35.904031   17722 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 12:25:36.142024   17722 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 12:25:36.288455   17722 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 12:25:36.556621   17722 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 12:25:36.634522   17722 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 12:25:36.899304   17722 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 12:25:36.899717   17722 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 12:25:36.902090   17722 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 12:25:36.923478   17722 out.go:204]   - Booting up control plane ...
	I0429 12:25:36.923546   17722 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 12:25:36.923600   17722 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 12:25:36.923664   17722 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 12:25:36.923740   17722 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 12:25:36.923808   17722 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 12:25:36.923851   17722 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 12:25:37.025954   17722 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 12:25:37.026032   17722 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 12:25:38.035770   17722 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.010128476s
	I0429 12:25:38.035905   17722 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 12:25:43.764081   17722 kubeadm.go:309] [api-check] The API server is healthy after 5.731795501s
	I0429 12:25:43.772560   17722 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 12:25:43.789869   17722 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 12:25:43.805688   17722 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 12:25:43.805836   17722 kubeadm.go:309] [mark-control-plane] Marking the node ha-431000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 12:25:43.811946   17722 kubeadm.go:309] [bootstrap-token] Using token: m6uvir.d4h53m26b1o2593q
	I0429 12:25:43.849413   17722 out.go:204]   - Configuring RBAC rules ...
	I0429 12:25:43.849574   17722 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 12:25:43.852729   17722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 12:25:43.892587   17722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 12:25:43.894879   17722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 12:25:43.896753   17722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 12:25:43.898705   17722 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 12:25:44.168425   17722 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 12:25:44.581231   17722 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 12:25:45.167548   17722 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 12:25:45.168144   17722 kubeadm.go:309] 
	I0429 12:25:45.168196   17722 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 12:25:45.168206   17722 kubeadm.go:309] 
	I0429 12:25:45.168263   17722 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 12:25:45.168268   17722 kubeadm.go:309] 
	I0429 12:25:45.168291   17722 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 12:25:45.168346   17722 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 12:25:45.168395   17722 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 12:25:45.168404   17722 kubeadm.go:309] 
	I0429 12:25:45.168448   17722 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 12:25:45.168454   17722 kubeadm.go:309] 
	I0429 12:25:45.168486   17722 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 12:25:45.168492   17722 kubeadm.go:309] 
	I0429 12:25:45.168531   17722 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 12:25:45.168592   17722 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 12:25:45.168639   17722 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 12:25:45.168643   17722 kubeadm.go:309] 
	I0429 12:25:45.168723   17722 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 12:25:45.168792   17722 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 12:25:45.168798   17722 kubeadm.go:309] 
	I0429 12:25:45.168870   17722 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token m6uvir.d4h53m26b1o2593q \
	I0429 12:25:45.168957   17722 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:536cc8a1d30a1fe87d4b4651015a78bced2496ff2cc6e72fc88d924b8d819ff9 \
	I0429 12:25:45.168980   17722 kubeadm.go:309] 	--control-plane 
	I0429 12:25:45.168984   17722 kubeadm.go:309] 
	I0429 12:25:45.169055   17722 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 12:25:45.169064   17722 kubeadm.go:309] 
	I0429 12:25:45.169140   17722 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token m6uvir.d4h53m26b1o2593q \
	I0429 12:25:45.169225   17722 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:536cc8a1d30a1fe87d4b4651015a78bced2496ff2cc6e72fc88d924b8d819ff9 
	I0429 12:25:45.169327   17722 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 12:25:45.169339   17722 cni.go:84] Creating CNI manager for ""
	I0429 12:25:45.169343   17722 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 12:25:45.192418   17722 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 12:25:45.250402   17722 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 12:25:45.254278   17722 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 12:25:45.254288   17722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 12:25:45.273385   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 12:25:45.493620   17722 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 12:25:45.493687   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:45.493694   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-431000 minikube.k8s.io/updated_at=2024_04_29T12_25_45_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=ha-431000 minikube.k8s.io/primary=true
	I0429 12:25:45.612967   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:45.644835   17722 ops.go:34] apiserver oom_adj: -16
	I0429 12:25:46.113178   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:46.613113   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:47.114140   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:47.613848   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:48.113291   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:48.613593   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:49.113154   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:49.613813   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:50.113663   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:50.613520   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:51.113510   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:51.613048   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:52.113848   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:52.614240   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:53.113945   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:53.615040   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:54.113136   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:54.614312   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:55.113432   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:55.614732   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:56.113281   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:56.613014   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:57.114177   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:57.613453   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:57.726340   17722 kubeadm.go:1107] duration metric: took 12.232843816s to wait for elevateKubeSystemPrivileges
	W0429 12:25:57.726368   17722 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 12:25:57.726373   17722 kubeadm.go:393] duration metric: took 23.654349307s to StartCluster
	I0429 12:25:57.726388   17722 settings.go:142] acquiring lock: {Name:mk4619a2e41059be70c5979af83844e654a88068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:57.726480   17722 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18774-15855/kubeconfig
	I0429 12:25:57.726968   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/kubeconfig: {Name:mkea524d41d79b030c40cdcb50fe58e264b35b6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:57.727247   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 12:25:57.727263   17722 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 12:25:57.727278   17722 start.go:240] waiting for startup goroutines ...
	I0429 12:25:57.727289   17722 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 12:25:57.727325   17722 addons.go:69] Setting storage-provisioner=true in profile "ha-431000"
	I0429 12:25:57.727331   17722 addons.go:69] Setting default-storageclass=true in profile "ha-431000"
	I0429 12:25:57.727351   17722 addons.go:234] Setting addon storage-provisioner=true in "ha-431000"
	I0429 12:25:57.727356   17722 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-431000"
	I0429 12:25:57.727369   17722 host.go:66] Checking if "ha-431000" exists ...
	I0429 12:25:57.727424   17722 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:25:57.727601   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:57.727602   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:57.727616   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:57.727617   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:57.736886   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61984
	I0429 12:25:57.737295   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:57.737678   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:57.737690   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:57.737981   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61986
	I0429 12:25:57.738074   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:57.738331   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:57.738533   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:57.738579   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:57.738658   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:57.738671   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:57.739264   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:57.739478   17722 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0429 12:25:57.739646   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:57.739702   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:57.741554   17722 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/18774-15855/kubeconfig
	I0429 12:25:57.741775   17722 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/18774-15855/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xdd3cee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 12:25:57.742264   17722 cert_rotation.go:137] Starting client certificate rotation controller
	I0429 12:25:57.742421   17722 addons.go:234] Setting addon default-storageclass=true in "ha-431000"
	I0429 12:25:57.742443   17722 host.go:66] Checking if "ha-431000" exists ...
	I0429 12:25:57.742660   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:57.742677   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:57.747696   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61988
	I0429 12:25:57.748147   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:57.748482   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:57.748498   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:57.748745   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:57.748874   17722 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0429 12:25:57.748975   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:57.749041   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:57.750035   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:57.788044   17722 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 12:25:57.751748   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61990
	I0429 12:25:57.825563   17722 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 12:25:57.825579   17722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 12:25:57.825595   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:57.825776   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:57.825980   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:57.826026   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:57.826183   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:57.826334   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:57.826500   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:57.826516   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:57.826839   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:57.827237   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:57.827262   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:57.835538   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 12:25:57.836072   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61993
	I0429 12:25:57.836372   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:57.836712   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:57.836731   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:57.836941   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:57.837051   17722 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0429 12:25:57.837149   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:57.837235   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:57.838190   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:57.838373   17722 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 12:25:57.838382   17722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 12:25:57.838392   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:57.838527   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:57.838615   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:57.838701   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:57.838792   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:57.921992   17722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 12:25:57.963950   17722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 12:25:58.239892   17722 start.go:946] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0429 12:25:58.350022   17722 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:58.350036   17722 main.go:141] libmachine: (ha-431000) Calling .Close
	I0429 12:25:58.350091   17722 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:58.350105   17722 main.go:141] libmachine: (ha-431000) Calling .Close
	I0429 12:25:58.350223   17722 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0429 12:25:58.350224   17722 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:58.350239   17722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:58.350250   17722 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:58.350258   17722 main.go:141] libmachine: (ha-431000) Calling .Close
	I0429 12:25:58.350278   17722 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:58.350288   17722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:58.350302   17722 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:58.350302   17722 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0429 12:25:58.350309   17722 main.go:141] libmachine: (ha-431000) Calling .Close
	I0429 12:25:58.350428   17722 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:58.350439   17722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:58.350445   17722 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0429 12:25:58.350487   17722 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:58.350497   17722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:58.350532   17722 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0429 12:25:58.350553   17722 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0429 12:25:58.350558   17722 round_trippers.go:469] Request Headers:
	I0429 12:25:58.350566   17722 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:25:58.350571   17722 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0429 12:25:58.356148   17722 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:25:58.356671   17722 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0429 12:25:58.356684   17722 round_trippers.go:469] Request Headers:
	I0429 12:25:58.356690   17722 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:25:58.356695   17722 round_trippers.go:473]     Content-Type: application/json
	I0429 12:25:58.356699   17722 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0429 12:25:58.359632   17722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:25:58.359767   17722 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:58.359777   17722 main.go:141] libmachine: (ha-431000) Calling .Close
	I0429 12:25:58.359915   17722 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0429 12:25:58.359935   17722 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:58.359942   17722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:58.383304   17722 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0429 12:25:58.425404   17722 addons.go:505] duration metric: took 698.121896ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0429 12:25:58.425460   17722 start.go:245] waiting for cluster config update ...
	I0429 12:25:58.425487   17722 start.go:254] writing updated cluster config ...
	I0429 12:25:58.541992   17722 out.go:177] 
	I0429 12:25:58.562923   17722 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:25:58.563008   17722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json ...
	I0429 12:25:58.583885   17722 out.go:177] * Starting "ha-431000-m02" control-plane node in "ha-431000" cluster
	I0429 12:25:58.624732   17722 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 12:25:58.624767   17722 cache.go:56] Caching tarball of preloaded images
	I0429 12:25:58.624960   17722 preload.go:173] Found /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 12:25:58.624976   17722 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 12:25:58.625077   17722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json ...
	I0429 12:25:58.625802   17722 start.go:360] acquireMachinesLock for ha-431000-m02: {Name:mkd2435de3b02389c894a9c5afc7bb14c11265dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 12:25:58.625882   17722 start.go:364] duration metric: took 63.962µs to acquireMachinesLock for "ha-431000-m02"
	I0429 12:25:58.625908   17722 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 12:25:58.625983   17722 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0429 12:25:58.647279   17722 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 12:25:58.647410   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:58.647441   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:58.657607   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61997
	I0429 12:25:58.657937   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:58.658289   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:58.658303   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:58.658522   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:58.658625   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0429 12:25:58.658714   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:25:58.658812   17722 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0429 12:25:58.658829   17722 client.go:168] LocalClient.Create starting
	I0429 12:25:58.658858   17722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem
	I0429 12:25:58.658895   17722 main.go:141] libmachine: Decoding PEM data...
	I0429 12:25:58.658904   17722 main.go:141] libmachine: Parsing certificate...
	I0429 12:25:58.658941   17722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem
	I0429 12:25:58.658966   17722 main.go:141] libmachine: Decoding PEM data...
	I0429 12:25:58.658976   17722 main.go:141] libmachine: Parsing certificate...
	I0429 12:25:58.658987   17722 main.go:141] libmachine: Running pre-create checks...
	I0429 12:25:58.658993   17722 main.go:141] libmachine: (ha-431000-m02) Calling .PreCreateCheck
	I0429 12:25:58.659060   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:58.659082   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0429 12:25:58.684517   17722 main.go:141] libmachine: Creating machine...
	I0429 12:25:58.684548   17722 main.go:141] libmachine: (ha-431000-m02) Calling .Create
	I0429 12:25:58.684719   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:58.685030   17722 main.go:141] libmachine: (ha-431000-m02) DBG | I0429 12:25:58.684708   17751 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/18774-15855/.minikube
	I0429 12:25:58.685090   17722 main.go:141] libmachine: (ha-431000-m02) Downloading /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18774-15855/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 12:25:58.880461   17722 main.go:141] libmachine: (ha-431000-m02) DBG | I0429 12:25:58.880334   17751 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa...
	I0429 12:25:58.972148   17722 main.go:141] libmachine: (ha-431000-m02) DBG | I0429 12:25:58.972072   17751 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk...
	I0429 12:25:58.972171   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Writing magic tar header
	I0429 12:25:58.972194   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Writing SSH key tar header
	I0429 12:25:58.972426   17722 main.go:141] libmachine: (ha-431000-m02) DBG | I0429 12:25:58.972393   17751 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02 ...
	I0429 12:25:59.330149   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:59.330167   17722 main.go:141] libmachine: (ha-431000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/hyperkit.pid
	I0429 12:25:59.330184   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Using UUID 9365fc26-c2e1-4d51-b9c4-d17102d62f4c
	I0429 12:25:59.355313   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Generated MAC ca:19:33:23:af:66
	I0429 12:25:59.355328   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0429 12:25:59.355356   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9365fc26-c2e1-4d51-b9c4-d17102d62f4c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b21b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0429 12:25:59.355382   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9365fc26-c2e1-4d51-b9c4-d17102d62f4c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b21b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0429 12:25:59.355419   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9365fc26-c2e1-4d51-b9c4-d17102d62f4c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0429 12:25:59.355452   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9365fc26-c2e1-4d51-b9c4-d17102d62f4c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0429 12:25:59.355465   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0429 12:25:59.358434   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: Pid is 17752
	I0429 12:25:59.359551   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 0
	I0429 12:25:59.359562   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:59.359638   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:25:59.360574   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:25:59.360681   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:25:59.360697   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:25:59.360720   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:59.360739   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:59.360752   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:59.360762   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:59.360776   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:59.360790   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:59.360800   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:59.360809   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:59.360829   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:59.360841   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:59.360856   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:59.360868   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:59.360879   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:59.360894   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:59.360903   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:59.360932   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:59.360940   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:59.360949   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:59.360958   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:59.360966   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:59.360973   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:59.360981   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:59.360987   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:59.360995   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:59.361001   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:59.361007   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:59.361029   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:59.361043   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:59.361055   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:59.361062   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:59.361069   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:59.361077   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:59.361085   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:59.361094   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:59.361102   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:59.361111   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:59.361122   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:59.361132   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:59.361139   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:59.361146   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:59.361155   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:59.361168   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:59.361180   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:59.361200   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:59.361215   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:59.361230   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:59.361243   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:59.361254   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:59.361267   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:59.361281   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:59.361294   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:59.365988   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0429 12:25:59.374154   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0429 12:25:59.374983   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0429 12:25:59.375007   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0429 12:25:59.375049   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0429 12:25:59.375071   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0429 12:25:59.757805   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0429 12:25:59.757822   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0429 12:25:59.872427   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0429 12:25:59.872442   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0429 12:25:59.872451   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0429 12:25:59.872459   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0429 12:25:59.873348   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0429 12:25:59.873361   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0429 12:26:01.362461   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 1
	I0429 12:26:01.362478   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:01.362589   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:01.363375   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:26:01.363463   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:26:01.363472   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:26:01.363482   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:26:01.363495   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:26:01.363505   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:26:01.363523   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:26:01.363540   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:26:01.363555   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:26:01.363564   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:26:01.363570   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:26:01.363580   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:26:01.363585   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:26:01.363594   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:26:01.363601   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:26:01.363606   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:26:01.363615   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:26:01.363620   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:26:01.363627   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:26:01.363641   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:26:01.363649   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:26:01.363656   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:26:01.363664   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:26:01.363671   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:26:01.363678   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:26:01.363685   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:26:01.363691   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:26:01.363698   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:26:01.363706   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:26:01.363715   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:26:01.363729   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:26:01.363745   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:26:01.363752   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:26:01.363758   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:26:01.363763   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:26:01.363770   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:26:01.363779   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:26:01.363786   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:26:01.363791   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:26:01.363806   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:26:01.363819   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:26:01.363827   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:26:01.363833   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:26:01.363839   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:26:01.363847   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:26:01.363854   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:26:01.363862   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:26:01.363876   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:26:01.363887   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:26:01.363897   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:26:01.363909   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:26:01.363929   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:26:01.363942   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:26:01.363951   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:26:03.364674   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 2
	I0429 12:26:03.364689   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:03.364761   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:03.365538   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:26:03.365618   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:26:03.365628   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:26:03.365649   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:26:03.365656   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:26:03.365663   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:26:03.365668   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:26:03.365681   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:26:03.365689   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:26:03.365701   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:26:03.365714   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:26:03.365721   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:26:03.365734   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:26:03.365745   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:26:03.365754   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:26:03.365760   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:26:03.365766   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:26:03.365776   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:26:03.365783   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:26:03.365789   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:26:03.365795   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:26:03.365802   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:26:03.365809   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:26:03.365820   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:26:03.365828   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:26:03.365835   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:26:03.365840   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:26:03.365850   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:26:03.365857   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:26:03.365871   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:26:03.365880   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:26:03.365887   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:26:03.365894   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:26:03.365906   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:26:03.365922   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:26:03.365930   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:26:03.365939   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:26:03.365955   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:26:03.365964   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:26:03.365971   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:26:03.365979   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:26:03.365986   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:26:03.365993   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:26:03.366000   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:26:03.366008   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:26:03.366015   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:26:03.366022   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:26:03.366028   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:26:03.366035   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:26:03.366043   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:26:03.366050   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:26:03.366067   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:26:03.366079   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:26:03.366088   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:26:05.177799   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:26:05 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0429 12:26:05.177845   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:26:05 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0429 12:26:05.177853   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:26:05 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0429 12:26:05.201640   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:26:05 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0429 12:26:05.367080   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 3
	I0429 12:26:05.367108   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:05.367266   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:05.368694   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:26:05.368836   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:26:05.368855   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:26:05.368872   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:26:05.368902   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:26:05.368917   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:26:05.368932   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:26:05.368944   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:26:05.368959   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:26:05.368998   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:26:05.369028   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:26:05.369054   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:26:05.369067   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:26:05.369081   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:26:05.369089   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:26:05.369107   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:26:05.369129   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:26:05.369139   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:26:05.369150   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:26:05.369170   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:26:05.369187   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:26:05.369198   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:26:05.369209   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:26:05.369224   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:26:05.369237   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:26:05.369248   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:26:05.369260   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:26:05.369269   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:26:05.369280   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:26:05.369290   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:26:05.369300   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:26:05.369309   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:26:05.369317   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:26:05.369342   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:26:05.369360   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:26:05.369371   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:26:05.369383   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:26:05.369391   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:26:05.369402   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:26:05.369412   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:26:05.369420   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:26:05.369429   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:26:05.369437   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:26:05.369446   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:26:05.369456   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:26:05.369467   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:26:05.369476   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:26:05.369491   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:26:05.369505   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:26:05.369516   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:26:05.369528   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:26:05.369539   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:26:05.369549   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:26:05.369556   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:26:07.369177   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 4
	I0429 12:26:07.369192   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:07.369292   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:07.370069   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:26:07.370166   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:26:07.370180   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:26:07.370193   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:26:07.370204   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:26:07.370215   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:26:07.370228   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:26:07.370247   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:26:07.370259   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:26:07.370274   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:26:07.370287   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:26:07.370296   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:26:07.370304   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:26:07.370309   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:26:07.370315   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:26:07.370344   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:26:07.370354   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:26:07.370363   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:26:07.370371   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:26:07.370384   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:26:07.370397   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:26:07.370403   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:26:07.370411   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:26:07.370418   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:26:07.370423   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:26:07.370438   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:26:07.370446   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:26:07.370452   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:26:07.370467   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:26:07.370478   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:26:07.370508   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:26:07.370534   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:26:07.370550   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:26:07.370567   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:26:07.370582   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:26:07.370595   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:26:07.370607   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:26:07.370629   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:26:07.370639   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:26:07.370646   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:26:07.370659   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:26:07.370669   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:26:07.370678   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:26:07.370685   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:26:07.370695   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:26:07.370729   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:26:07.370740   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:26:07.370745   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:26:07.370768   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:26:07.370775   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:26:07.370785   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:26:07.370791   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:26:07.370807   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:26:07.370822   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:26:09.370416   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 5
	I0429 12:26:09.370434   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:09.370529   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:09.371299   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:26:09.371370   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 53 entries in /var/db/dhcpd_leases!
	I0429 12:26:09.371380   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:ca:19:33:23:af:66 ID:1,ca:19:33:23:af:66 Lease:0x663145d0}
	I0429 12:26:09.371387   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found match: ca:19:33:23:af:66
	I0429 12:26:09.371394   17722 main.go:141] libmachine: (ha-431000-m02) DBG | IP: 192.169.0.54
	I0429 12:26:09.371443   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0429 12:26:09.371998   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:09.372094   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:09.372184   17722 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 12:26:09.372192   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0429 12:26:09.372282   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:09.372344   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:09.373081   17722 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 12:26:09.373088   17722 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 12:26:09.373091   17722 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 12:26:09.373096   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:09.373185   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:09.373265   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:09.373344   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:09.373427   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:09.373540   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:09.373715   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:09.373722   17722 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 12:26:09.398851   17722 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0429 12:26:12.461040   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:26:12.461055   17722 main.go:141] libmachine: Detecting the provisioner...
	I0429 12:26:12.461061   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.461182   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.461292   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.461377   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.461470   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.461607   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:12.461746   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:12.461754   17722 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 12:26:12.523397   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 12:26:12.523438   17722 main.go:141] libmachine: found compatible host: buildroot
	I0429 12:26:12.523445   17722 main.go:141] libmachine: Provisioning with buildroot...
	I0429 12:26:12.523450   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0429 12:26:12.523578   17722 buildroot.go:166] provisioning hostname "ha-431000-m02"
	I0429 12:26:12.523587   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0429 12:26:12.523662   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.523743   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.523829   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.523933   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.524027   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.524160   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:12.524319   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:12.524328   17722 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m02 && echo "ha-431000-m02" | sudo tee /etc/hostname
	I0429 12:26:12.595067   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m02
	
	I0429 12:26:12.595081   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.595219   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.595311   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.595396   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.595487   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.595609   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:12.595751   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:12.595763   17722 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 12:26:12.665199   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:26:12.665226   17722 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18774-15855/.minikube CaCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18774-15855/.minikube}
	I0429 12:26:12.665245   17722 buildroot.go:174] setting up certificates
	I0429 12:26:12.665255   17722 provision.go:84] configureAuth start
	I0429 12:26:12.665262   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0429 12:26:12.665398   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0429 12:26:12.665483   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.665568   17722 provision.go:143] copyHostCerts
	I0429 12:26:12.665595   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem
	I0429 12:26:12.665640   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem, removing ...
	I0429 12:26:12.665646   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem
	I0429 12:26:12.665779   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem (1082 bytes)
	I0429 12:26:12.665976   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem
	I0429 12:26:12.666005   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem, removing ...
	I0429 12:26:12.666009   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem
	I0429 12:26:12.666094   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem (1123 bytes)
	I0429 12:26:12.666237   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem
	I0429 12:26:12.666265   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem, removing ...
	I0429 12:26:12.666269   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem
	I0429 12:26:12.666334   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem (1675 bytes)
	I0429 12:26:12.666486   17722 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m02 san=[127.0.0.1 192.169.0.54 ha-431000-m02 localhost minikube]
	I0429 12:26:12.788915   17722 provision.go:177] copyRemoteCerts
	I0429 12:26:12.788965   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 12:26:12.788978   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.789131   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.789238   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.789318   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.789407   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0429 12:26:12.827773   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 12:26:12.827848   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 12:26:12.847569   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 12:26:12.847650   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 12:26:12.867106   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 12:26:12.867164   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 12:26:12.886794   17722 provision.go:87] duration metric: took 221.533222ms to configureAuth
	I0429 12:26:12.886806   17722 buildroot.go:189] setting minikube options for container-runtime
	I0429 12:26:12.886945   17722 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:26:12.886958   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:12.887088   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.887189   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.887288   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.887371   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.887445   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.887559   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:12.887688   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:12.887695   17722 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 12:26:12.949449   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 12:26:12.960507   17722 buildroot.go:70] root file system type: tmpfs
	I0429 12:26:12.960630   17722 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 12:26:12.960650   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.960811   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.960918   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.961031   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.961144   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.961314   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:12.961496   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:12.961559   17722 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.53"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 12:26:13.034441   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.53
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 12:26:13.034459   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:13.034584   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:13.034670   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:13.034771   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:13.034868   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:13.034990   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:13.035131   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:13.035143   17722 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 12:26:14.527695   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 12:26:14.527713   17722 main.go:141] libmachine: Checking connection to Docker...
	I0429 12:26:14.527719   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetURL
	I0429 12:26:14.527859   17722 main.go:141] libmachine: Docker is up and running!
	I0429 12:26:14.527867   17722 main.go:141] libmachine: Reticulating splines...
	I0429 12:26:14.527872   17722 client.go:171] duration metric: took 15.869221462s to LocalClient.Create
	I0429 12:26:14.527883   17722 start.go:167] duration metric: took 15.869255705s to libmachine.API.Create "ha-431000"
	I0429 12:26:14.527888   17722 start.go:293] postStartSetup for "ha-431000-m02" (driver="hyperkit")
	I0429 12:26:14.527895   17722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 12:26:14.527905   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:14.528059   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 12:26:14.528083   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:14.528190   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:14.528277   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:14.528371   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:14.528462   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0429 12:26:14.571874   17722 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 12:26:14.575715   17722 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 12:26:14.575725   17722 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18774-15855/.minikube/addons for local assets ...
	I0429 12:26:14.575820   17722 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18774-15855/.minikube/files for local assets ...
	I0429 12:26:14.575979   17722 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem -> 163012.pem in /etc/ssl/certs
	I0429 12:26:14.575985   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem -> /etc/ssl/certs/163012.pem
	I0429 12:26:14.576144   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 12:26:14.585193   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem --> /etc/ssl/certs/163012.pem (1708 bytes)
	I0429 12:26:14.617800   17722 start.go:296] duration metric: took 89.904272ms for postStartSetup
	I0429 12:26:14.617830   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0429 12:26:14.618443   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0429 12:26:14.618595   17722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json ...
	I0429 12:26:14.619362   17722 start.go:128] duration metric: took 15.993555623s to createHost
	I0429 12:26:14.619378   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:14.619471   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:14.619554   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:14.619637   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:14.619749   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:14.619852   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:14.619986   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:14.619993   17722 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 12:26:14.684608   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714418773.901729701
	
	I0429 12:26:14.684618   17722 fix.go:216] guest clock: 1714418773.901729701
	I0429 12:26:14.684638   17722 fix.go:229] Guest: 2024-04-29 12:26:13.901729701 -0700 PDT Remote: 2024-04-29 12:26:14.619372 -0700 PDT m=+66.737316646 (delta=-717.642299ms)
	I0429 12:26:14.684650   17722 fix.go:200] guest clock delta is within tolerance: -717.642299ms
	I0429 12:26:14.684654   17722 start.go:83] releasing machines lock for "ha-431000-m02", held for 16.058950363s
	I0429 12:26:14.684671   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:14.684806   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0429 12:26:14.706148   17722 out.go:177] * Found network options:
	I0429 12:26:14.727219   17722 out.go:177]   - NO_PROXY=192.169.0.53
	W0429 12:26:14.749052   17722 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 12:26:14.749119   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:14.749950   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:14.750211   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:14.750321   17722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 12:26:14.750357   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	W0429 12:26:14.750432   17722 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 12:26:14.750551   17722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 12:26:14.750572   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:14.750573   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:14.750783   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:14.750814   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:14.751054   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:14.751092   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:14.751333   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0429 12:26:14.751365   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:14.751506   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	W0429 12:26:14.787217   17722 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 12:26:14.787277   17722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 12:26:14.848201   17722 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 12:26:14.848227   17722 start.go:494] detecting cgroup driver to use...
	I0429 12:26:14.848358   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:26:14.865009   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 12:26:14.874483   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 12:26:14.883536   17722 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 12:26:14.883585   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 12:26:14.892833   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:26:14.901853   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 12:26:14.910642   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:26:14.919451   17722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 12:26:14.928628   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 12:26:14.937504   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 12:26:14.946508   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 12:26:14.955539   17722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 12:26:14.963731   17722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 12:26:14.971817   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:26:15.075369   17722 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 12:26:15.095140   17722 start.go:494] detecting cgroup driver to use...
	I0429 12:26:15.095227   17722 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 12:26:15.114227   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:26:15.131639   17722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 12:26:15.152206   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:26:15.163545   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:26:15.173829   17722 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 12:26:15.196676   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:26:15.207120   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:26:15.222241   17722 ssh_runner.go:195] Run: which cri-dockerd
	I0429 12:26:15.225235   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 12:26:15.232398   17722 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 12:26:15.245911   17722 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 12:26:15.342741   17722 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 12:26:15.443507   17722 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 12:26:15.443528   17722 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 12:26:15.456917   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:26:15.554009   17722 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 12:27:16.592140   17722 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.038821319s)
	I0429 12:27:16.592205   17722 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0429 12:27:16.627966   17722 out.go:177] 
	W0429 12:27:16.650769   17722 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 19:26:12 ha-431000-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 19:26:12 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:12.562442558Z" level=info msg="Starting up"
	Apr 29 19:26:12 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:12.563076350Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 19:26:12 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:12.567928747Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=527
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.583689216Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597320910Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597367066Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597410988Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597446593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597506031Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597540413Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597805772Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597892122Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597933427Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597944227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.598009110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.598199858Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.599844343Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.599884447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.599970866Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.600003444Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.600098323Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.600162313Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.600172836Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.602841911Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.602899413Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.602913390Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.602923442Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.602932201Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603001707Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603176643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603272136Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603306202Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603317268Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603331817Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603343600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603351894Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603363377Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603373358Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603384153Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603392284Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603399425Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603411790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603424697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603433133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603441352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603448955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603456927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603465082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603473249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603488023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603499629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603507180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603514591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603523215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603533846Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603547413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603558028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603566468Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603593185Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603603060Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603610561Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603617284Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603705135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603740552Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603750151Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603879153Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603936791Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603967392Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603999139Z" level=info msg="containerd successfully booted in 0.021874s"
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.594997677Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.598794560Z" level=info msg="Loading containers: start."
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.706853403Z" level=info msg="Loading containers: done."
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.713747539Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.713886174Z" level=info msg="Daemon has completed initialization"
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.742027280Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 19:26:13 ha-431000-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.744424276Z" level=info msg="API listen on [::]:2376"
	Apr 29 19:26:14 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:14.784502897Z" level=info msg="Processing signal 'terminated'"
	Apr 29 19:26:14 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:14.785439200Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 19:26:14 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:14.785771501Z" level=info msg="Daemon shutdown complete"
	Apr 29 19:26:14 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:14.785832350Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 19:26:14 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:14.785935215Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 19:26:14 ha-431000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 19:26:15 ha-431000-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 19:26:15 ha-431000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 19:26:15 ha-431000-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 19:26:15 ha-431000-m02 dockerd[864]: time="2024-04-29T19:26:15.839252866Z" level=info msg="Starting up"
	Apr 29 19:27:16 ha-431000-m02 dockerd[864]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 19:27:16 ha-431000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 19:27:16 ha-431000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 19:27:16 ha-431000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0429 12:27:16.650883   17722 out.go:239] * 
	W0429 12:27:16.652130   17722 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 12:27:16.735182   17722 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:103: failed to fresh-start ha (multi-control plane) cluster. args "out/minikube-darwin-amd64 start -p ha-431000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit " : exit status 90
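The harness collects post-mortem logs below. As a minimal, illustrative sketch only (profile and node names are copied from the log above; the containerd check is an assumption based on dockerd's "failed to dial /run/containerd/containerd.sock" error), the same failure could be inspected by hand with:

	# is containerd itself healthy on the second control-plane node?
	out/minikube-darwin-amd64 -p ha-431000 ssh --node ha-431000-m02 -- 'sudo systemctl status containerd --no-pager'
	# last containerd messages around the docker restart window
	out/minikube-darwin-amd64 -p ha-431000 ssh --node ha-431000-m02 -- 'sudo journalctl -u containerd --no-pager | tail -n 40'
	# full log bundle, as suggested in the error box above
	out/minikube-darwin-amd64 -p ha-431000 logs --file=logs.txt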
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000
helpers_test.go:244: <<< TestMultiControlPlane/serial/StartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/StartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-431000 logs -n 25: (2.227412779s)
helpers_test.go:252: TestMultiControlPlane/serial/StartCluster logs: 
-- stdout --
	
	==> Audit <==
	|----------------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	|    Command     |                                                        Args                                                        |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|----------------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| ssh            | functional-692000 ssh sudo                                                                                         | functional-692000 | jenkins | v1.33.0 | 29 Apr 24 12:24 PDT |                     |
	|                | umount -f /mount-9p                                                                                                |                   |         |         |                     |                     |
	| mount          | -p functional-692000                                                                                               | functional-692000 | jenkins | v1.33.0 | 29 Apr 24 12:24 PDT |                     |
	|                | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1487767947/001:/mount1 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| mount          | -p functional-692000                                                                                               | functional-692000 | jenkins | v1.33.0 | 29 Apr 24 12:24 PDT |                     |
	|                | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1487767947/001:/mount2 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| mount          | -p functional-692000                                                                                               | functional-692000 | jenkins | v1.33.0 | 29 Apr 24 12:24 PDT |                     |
	|                | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1487767947/001:/mount3 |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| ssh            | functional-692000 ssh findmnt                                                                                      | functional-692000 | jenkins | v1.33.0 | 29 Apr 24 12:24 PDT |                     |
	|                | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh            | functional-692000 ssh findmnt                                                                                      | functional-692000 | jenkins | v1.33.0 | 29 Apr 24 12:24 PDT | 29 Apr 24 12:24 PDT |
	|                | -T /mount1                                                                                                         |                   |         |         |                     |                     |
	| ssh            | functional-692000 ssh findmnt                                                                                      | functional-692000 | jenkins | v1.33.0 | 29 Apr 24 12:24 PDT | 29 Apr 24 12:24 PDT |
	|                | -T /mount2                                                                                                         |                   |         |         |                     |                     |
	| ssh            | functional-692000 ssh findmnt                                                                                      | functional-692000 | jenkins | v1.33.0 | 29 Apr 24 12:24 PDT | 29 Apr 24 12:24 PDT |
	|                | -T /mount3                                                                                                         |                   |         |         |                     |                     |
	| mount          | -p functional-692000                                                                                               | functional-692000 | jenkins | v1.33.0 | 29 Apr 24 12:24 PDT |                     |
	|                | --kill=true                                                                                                        |                   |         |         |                     |                     |
	| start          | -p functional-692000                                                                                               | functional-692000 | jenkins | v1.33.0 | 29 Apr 24 12:24 PDT |                     |
	|                | --dry-run --memory                                                                                                 |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                                                                            |                   |         |         |                     |                     |
	|                | --driver=hyperkit                                                                                                  |                   |         |         |                     |                     |
	| start          | -p functional-692000 --dry-run                                                                                     | functional-692000 | jenkins | v1.33.0 | 29 Apr 24 12:24 PDT |                     |
	|                | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	|                | --driver=hyperkit                                                                                                  |                   |         |         |                     |                     |
	| start          | -p functional-692000                                                                                               | functional-692000 | jenkins | v1.33.0 | 29 Apr 24 12:24 PDT |                     |
	|                | --dry-run --memory                                                                                                 |                   |         |         |                     |                     |
	|                | 250MB --alsologtostderr                                                                                            |                   |         |         |                     |                     |
	|                | --driver=hyperkit                                                                                                  |                   |         |         |                     |                     |
	| dashboard      | --url --port 36195                                                                                                 | functional-692000 | jenkins | v1.33.0 | 29 Apr 24 12:24 PDT | 29 Apr 24 12:25 PDT |
	|                | -p functional-692000                                                                                               |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=1                                                                                             |                   |         |         |                     |                     |
	| update-context | functional-692000                                                                                                  | functional-692000 | jenkins | v1.33.0 | 29 Apr 24 12:24 PDT | 29 Apr 24 12:24 PDT |
	|                | update-context                                                                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                             |                   |         |         |                     |                     |
	| update-context | functional-692000                                                                                                  | functional-692000 | jenkins | v1.33.0 | 29 Apr 24 12:24 PDT | 29 Apr 24 12:24 PDT |
	|                | update-context                                                                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                             |                   |         |         |                     |                     |
	| update-context | functional-692000                                                                                                  | functional-692000 | jenkins | v1.33.0 | 29 Apr 24 12:24 PDT | 29 Apr 24 12:24 PDT |
	|                | update-context                                                                                                     |                   |         |         |                     |                     |
	|                | --alsologtostderr -v=2                                                                                             |                   |         |         |                     |                     |
	| image          | functional-692000                                                                                                  | functional-692000 | jenkins | v1.33.0 | 29 Apr 24 12:24 PDT | 29 Apr 24 12:24 PDT |
	|                | image ls --format short                                                                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                  |                   |         |         |                     |                     |
	| image          | functional-692000                                                                                                  | functional-692000 | jenkins | v1.33.0 | 29 Apr 24 12:24 PDT | 29 Apr 24 12:24 PDT |
	|                | image ls --format yaml                                                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                  |                   |         |         |                     |                     |
	| ssh            | functional-692000 ssh pgrep                                                                                        | functional-692000 | jenkins | v1.33.0 | 29 Apr 24 12:24 PDT |                     |
	|                | buildkitd                                                                                                          |                   |         |         |                     |                     |
	| image          | functional-692000 image build -t                                                                                   | functional-692000 | jenkins | v1.33.0 | 29 Apr 24 12:24 PDT | 29 Apr 24 12:24 PDT |
	|                | localhost/my-image:functional-692000                                                                               |                   |         |         |                     |                     |
	|                | testdata/build --alsologtostderr                                                                                   |                   |         |         |                     |                     |
	| image          | functional-692000 image ls                                                                                         | functional-692000 | jenkins | v1.33.0 | 29 Apr 24 12:24 PDT | 29 Apr 24 12:24 PDT |
	| image          | functional-692000                                                                                                  | functional-692000 | jenkins | v1.33.0 | 29 Apr 24 12:24 PDT | 29 Apr 24 12:24 PDT |
	|                | image ls --format json                                                                                             |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                  |                   |         |         |                     |                     |
	| image          | functional-692000                                                                                                  | functional-692000 | jenkins | v1.33.0 | 29 Apr 24 12:24 PDT | 29 Apr 24 12:24 PDT |
	|                | image ls --format table                                                                                            |                   |         |         |                     |                     |
	|                | --alsologtostderr                                                                                                  |                   |         |         |                     |                     |
	| delete         | -p functional-692000                                                                                               | functional-692000 | jenkins | v1.33.0 | 29 Apr 24 12:25 PDT | 29 Apr 24 12:25 PDT |
	| start          | -p ha-431000 --wait=true                                                                                           | ha-431000         | jenkins | v1.33.0 | 29 Apr 24 12:25 PDT |                     |
	|                | --memory=2200 --ha                                                                                                 |                   |         |         |                     |                     |
	|                | -v=7 --alsologtostderr                                                                                             |                   |         |         |                     |                     |
	|                | --driver=hyperkit                                                                                                  |                   |         |         |                     |                     |
	|----------------|--------------------------------------------------------------------------------------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 12:25:07
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 12:25:07.932524   17722 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:25:07.932931   17722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:25:07.932939   17722 out.go:304] Setting ErrFile to fd 2...
	I0429 12:25:07.932945   17722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:25:07.933219   17722 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18774-15855/.minikube/bin
	I0429 12:25:07.935193   17722 out.go:298] Setting JSON to false
	I0429 12:25:07.966866   17722 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":10477,"bootTime":1714408230,"procs":429,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0429 12:25:07.967005   17722 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 12:25:07.990175   17722 out.go:177] * [ha-431000] minikube v1.33.0 on Darwin 14.4.1
	I0429 12:25:08.073413   17722 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 12:25:08.048486   17722 notify.go:220] Checking for updates...
	I0429 12:25:08.160225   17722 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18774-15855/kubeconfig
	I0429 12:25:08.223714   17722 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 12:25:08.245577   17722 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:25:08.266426   17722 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18774-15855/.minikube
	I0429 12:25:08.296555   17722 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 12:25:08.318007   17722 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:25:08.348195   17722 out.go:177] * Using the hyperkit driver based on user configuration
	I0429 12:25:08.391518   17722 start.go:297] selected driver: hyperkit
	I0429 12:25:08.391552   17722 start.go:901] validating driver "hyperkit" against <nil>
	I0429 12:25:08.391574   17722 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 12:25:08.395979   17722 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:25:08.396124   17722 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18774-15855/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0429 12:25:08.404644   17722 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
	I0429 12:25:08.408717   17722 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:08.408755   17722 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0429 12:25:08.408783   17722 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 12:25:08.409041   17722 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 12:25:08.409091   17722 cni.go:84] Creating CNI manager for ""
	I0429 12:25:08.409101   17722 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0429 12:25:08.409113   17722 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 12:25:08.409179   17722 start.go:340] cluster config:
	{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:25:08.409294   17722 iso.go:125] acquiring lock: {Name:mk81eeca500d1c63233c315450a8b707db3bb810 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:25:08.451413   17722 out.go:177] * Starting "ha-431000" primary control-plane node in "ha-431000" cluster
	I0429 12:25:08.472458   17722 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 12:25:08.472552   17722 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 12:25:08.472575   17722 cache.go:56] Caching tarball of preloaded images
	I0429 12:25:08.472808   17722 preload.go:173] Found /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 12:25:08.472832   17722 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 12:25:08.473307   17722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json ...
	I0429 12:25:08.473347   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json: {Name:mk35a218377c5d37033d174261b78cb0d686fe80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:08.474156   17722 start.go:360] acquireMachinesLock for ha-431000: {Name:mkd2435de3b02389c894a9c5afc7bb14c11265dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 12:25:08.474283   17722 start.go:364] duration metric: took 104.507µs to acquireMachinesLock for "ha-431000"
	I0429 12:25:08.474332   17722 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 12:25:08.474420   17722 start.go:125] createHost starting for "" (driver="hyperkit")
	I0429 12:25:08.516479   17722 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 12:25:08.516777   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:08.516851   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:08.526733   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61961
	I0429 12:25:08.527153   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:08.527581   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:08.527592   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:08.527830   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:08.527946   17722 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0429 12:25:08.528043   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:08.528148   17722 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0429 12:25:08.528173   17722 client.go:168] LocalClient.Create starting
	I0429 12:25:08.528213   17722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem
	I0429 12:25:08.528263   17722 main.go:141] libmachine: Decoding PEM data...
	I0429 12:25:08.528280   17722 main.go:141] libmachine: Parsing certificate...
	I0429 12:25:08.528335   17722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem
	I0429 12:25:08.528371   17722 main.go:141] libmachine: Decoding PEM data...
	I0429 12:25:08.528388   17722 main.go:141] libmachine: Parsing certificate...
	I0429 12:25:08.528404   17722 main.go:141] libmachine: Running pre-create checks...
	I0429 12:25:08.528414   17722 main.go:141] libmachine: (ha-431000) Calling .PreCreateCheck
	I0429 12:25:08.528488   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:08.528641   17722 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0429 12:25:08.529147   17722 main.go:141] libmachine: Creating machine...
	I0429 12:25:08.529155   17722 main.go:141] libmachine: (ha-431000) Calling .Create
	I0429 12:25:08.529220   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:08.529331   17722 main.go:141] libmachine: (ha-431000) DBG | I0429 12:25:08.529213   17730 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/18774-15855/.minikube
	I0429 12:25:08.529403   17722 main.go:141] libmachine: (ha-431000) Downloading /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18774-15855/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 12:25:08.704997   17722 main.go:141] libmachine: (ha-431000) DBG | I0429 12:25:08.704900   17730 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa...
	I0429 12:25:08.942793   17722 main.go:141] libmachine: (ha-431000) DBG | I0429 12:25:08.942700   17730 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/ha-431000.rawdisk...
	I0429 12:25:08.942814   17722 main.go:141] libmachine: (ha-431000) DBG | Writing magic tar header
	I0429 12:25:08.942822   17722 main.go:141] libmachine: (ha-431000) DBG | Writing SSH key tar header
	I0429 12:25:08.943194   17722 main.go:141] libmachine: (ha-431000) DBG | I0429 12:25:08.943155   17730 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000 ...
	I0429 12:25:09.300525   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:09.300543   17722 main.go:141] libmachine: (ha-431000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/hyperkit.pid
	I0429 12:25:09.300607   17722 main.go:141] libmachine: (ha-431000) DBG | Using UUID 24bdec35-56f8-46dd-965c-91e20eb4b3f3
	I0429 12:25:09.409075   17722 main.go:141] libmachine: (ha-431000) DBG | Generated MAC 12:79:5a:6e:b2:5a
	I0429 12:25:09.409094   17722 main.go:141] libmachine: (ha-431000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0429 12:25:09.409125   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"24bdec35-56f8-46dd-965c-91e20eb4b3f3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000198630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0429 12:25:09.409150   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"24bdec35-56f8-46dd-965c-91e20eb4b3f3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000198630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0429 12:25:09.409187   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "24bdec35-56f8-46dd-965c-91e20eb4b3f3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/ha-431000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0429 12:25:09.409213   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 24bdec35-56f8-46dd-965c-91e20eb4b3f3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/ha-431000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/console-ring -f kexec,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0429 12:25:09.409223   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0429 12:25:09.412103   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: Pid is 17733
	I0429 12:25:09.412560   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 0
	I0429 12:25:09.412590   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:09.412663   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:09.413636   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:09.413744   17722 main.go:141] libmachine: (ha-431000) DBG | Found 51 entries in /var/db/dhcpd_leases!
	I0429 12:25:09.413764   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:09.413783   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:09.413800   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:09.413811   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:09.413824   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:09.413850   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:09.413862   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:09.413875   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:09.413883   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:09.413914   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:09.413927   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:09.413957   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:09.413970   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:09.413986   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:09.413992   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:09.414001   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:09.414010   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:09.414024   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:09.414041   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:09.414056   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:09.414068   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:09.414082   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:09.414098   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:09.414114   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:09.414144   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:09.414157   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:09.414169   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:09.414178   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:09.414187   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:09.414196   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:09.414206   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:09.414215   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:09.414231   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:09.414248   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:09.414272   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:09.414288   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:09.414300   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:09.414315   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:09.414327   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:09.414339   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:09.414350   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:09.414363   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:09.414382   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:09.414397   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:09.414414   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:09.414428   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:09.414441   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:09.414455   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:09.414467   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:09.414490   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:09.414509   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:09.419450   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0429 12:25:09.470495   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0429 12:25:09.471088   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0429 12:25:09.471110   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0429 12:25:09.471118   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0429 12:25:09.471128   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0429 12:25:09.850480   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0429 12:25:09.850496   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0429 12:25:09.966742   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0429 12:25:09.966763   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0429 12:25:09.966770   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0429 12:25:09.966776   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0429 12:25:09.967629   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0429 12:25:09.967639   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0429 12:25:11.414538   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 1
	I0429 12:25:11.414553   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:11.414604   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:11.415399   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:11.415486   17722 main.go:141] libmachine: (ha-431000) DBG | Found 51 entries in /var/db/dhcpd_leases!
	I0429 12:25:11.415495   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:11.415504   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:11.415511   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:11.415518   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:11.415539   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:11.415546   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:11.415584   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:11.415593   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:11.415606   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:11.415614   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:11.415624   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:11.415633   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:11.415641   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:11.415647   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:11.415654   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:11.415660   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:11.415667   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:11.415676   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:11.415682   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:11.415689   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:11.415695   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:11.415702   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:11.415709   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:11.415714   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:11.415722   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:11.415729   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:11.415749   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:11.415755   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:11.415763   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:11.415771   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:11.415782   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:11.415790   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:11.415798   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:11.415804   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:11.415809   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:11.415816   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:11.415822   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:11.415828   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:11.415836   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:11.415844   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:11.415850   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:11.415858   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:11.415864   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:11.415872   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:11.415879   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:11.415886   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:11.415893   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:11.415898   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:11.415910   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:11.415924   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:11.415936   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:13.416614   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 2
	I0429 12:25:13.416628   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:13.416689   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:13.417651   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:13.417739   17722 main.go:141] libmachine: (ha-431000) DBG | Found 51 entries in /var/db/dhcpd_leases!
	I0429 12:25:13.417747   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:13.417765   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:13.417774   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:13.417791   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:13.417803   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:13.417810   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:13.417818   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:13.417823   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:13.417830   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:13.417836   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:13.417843   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:13.417849   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:13.417861   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:13.417870   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:13.417883   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:13.417895   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:13.417902   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:13.417909   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:13.417922   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:13.417935   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:13.417949   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:13.417955   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:13.417969   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:13.417980   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:13.417992   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:13.418008   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:13.418016   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:13.418023   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:13.418031   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:13.418038   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:13.418044   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:13.418052   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:13.418059   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:13.418065   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:13.418083   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:13.418093   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:13.418101   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:13.418108   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:13.418116   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:13.418123   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:13.418130   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:13.418137   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:13.418144   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:13.418151   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:13.418163   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:13.418181   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:13.418190   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:13.418198   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:13.418205   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:13.418212   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:13.418220   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:15.211585   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:15 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0429 12:25:15.211693   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:15 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0429 12:25:15.211704   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:15 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0429 12:25:15.236755   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:15 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0429 12:25:15.420156   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 3
	I0429 12:25:15.420179   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:15.420320   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:15.421755   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:15.421973   17722 main.go:141] libmachine: (ha-431000) DBG | Found 51 entries in /var/db/dhcpd_leases!
	I0429 12:25:15.421993   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:15.422007   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:15.422103   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:15.422117   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:15.422133   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:15.422156   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:15.422173   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:15.422185   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:15.422194   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:15.422211   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:15.422225   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:15.422235   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:15.422244   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:15.422263   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:15.422280   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:15.422297   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:15.422313   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:15.422336   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:15.422353   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:15.422369   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:15.422380   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:15.422393   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:15.422404   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:15.422412   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:15.422424   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:15.422432   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:15.422442   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:15.422456   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:15.422472   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:15.422492   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:15.422511   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:15.422535   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:15.422552   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:15.422563   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:15.422571   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:15.422581   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:15.422599   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:15.422609   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:15.422620   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:15.422628   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:15.422639   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:15.422670   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:15.422682   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:15.422700   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:15.422711   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:15.422721   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:15.422732   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:15.422754   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:15.422771   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:15.422797   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:15.422818   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:17.422424   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 4
	I0429 12:25:17.422440   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:17.422505   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:17.423336   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:17.423416   17722 main.go:141] libmachine: (ha-431000) DBG | Found 51 entries in /var/db/dhcpd_leases!
	I0429 12:25:17.423428   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:17.423437   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:17.423446   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:17.423453   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:17.423470   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:17.423477   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:17.423484   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:17.423492   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:17.423498   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:17.423504   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:17.423514   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:17.423519   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:17.423532   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:17.423542   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:17.423561   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:17.423570   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:17.423577   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:17.423593   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:17.423600   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:17.423608   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:17.423615   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:17.423621   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:17.423627   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:17.423634   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:17.423639   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:17.423647   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:17.423653   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:17.423666   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:17.423674   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:17.423681   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:17.423696   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:17.423704   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:17.423711   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:17.423718   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:17.423726   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:17.423733   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:17.423741   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:17.423750   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:17.423758   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:17.423764   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:17.423770   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:17.423780   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:17.423792   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:17.423800   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:17.423808   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:17.423815   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:17.423821   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:17.423838   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:17.423848   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:17.423855   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:17.423864   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:19.424012   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 5
	I0429 12:25:19.424040   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:19.424118   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:19.424931   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:19.425019   17722 main.go:141] libmachine: (ha-431000) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:25:19.425031   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:25:19.425039   17722 main.go:141] libmachine: (ha-431000) DBG | Found match: 12:79:5a:6e:b2:5a
	I0429 12:25:19.425043   17722 main.go:141] libmachine: (ha-431000) DBG | IP: 192.169.0.53
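	The repeated "Attempt N" blocks above are the hyperkit driver polling /var/db/dhcpd_leases roughly every two seconds until a lease whose hardware address matches the VM's generated MAC (12:79:5a:6e:b2:5a) shows up, at which point the VM's IP (192.169.0.53 here) is known. A minimal illustrative sketch of that kind of lookup, assuming the macOS bootpd lease-file layout implied by the parsed entries in the log (this is not the driver's actual code):

	package main

	import (
		"bufio"
		"fmt"
		"os"
		"strings"
	)

	// lookupLeaseIP scans a bootpd-style lease file for the given MAC address
	// and returns the matching IP, or "" if no lease for that MAC exists yet.
	func lookupLeaseIP(path, mac string) (string, error) {
		f, err := os.Open(path)
		if err != nil {
			return "", err
		}
		defer f.Close()

		var ip, hw string
		sc := bufio.NewScanner(f)
		for sc.Scan() {
			line := strings.TrimSpace(sc.Text())
			switch {
			case strings.HasPrefix(line, "ip_address="):
				ip = strings.TrimPrefix(line, "ip_address=")
			case strings.HasPrefix(line, "hw_address="):
				hw = strings.TrimPrefix(line, "hw_address=")
				if i := strings.Index(hw, ","); i >= 0 {
					hw = hw[i+1:] // drop the leading "1," seen in the log's ID field
				}
			case line == "}": // end of one lease record
				if strings.EqualFold(hw, mac) {
					return ip, nil
				}
				ip, hw = "", ""
			}
		}
		return "", sc.Err()
	}

	func main() {
		ip, err := lookupLeaseIP("/var/db/dhcpd_leases", "12:79:5a:6e:b2:5a")
		fmt.Println(ip, err)
	}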
	I0429 12:25:19.425100   17722 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0429 12:25:19.425750   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:19.425854   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:19.425968   17722 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 12:25:19.425977   17722 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0429 12:25:19.426076   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:19.426132   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:19.426957   17722 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 12:25:19.426968   17722 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 12:25:19.426974   17722 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 12:25:19.426979   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:19.427082   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:19.427181   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:19.427279   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:19.427395   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:19.427528   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:19.427739   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:19.427747   17722 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 12:25:20.480764   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:25:20.480777   17722 main.go:141] libmachine: Detecting the provisioner...
	I0429 12:25:20.480782   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:20.480910   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:20.481005   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.481084   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.481169   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:20.481302   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:20.481440   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:20.481448   17722 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 12:25:20.531217   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 12:25:20.531271   17722 main.go:141] libmachine: found compatible host: buildroot
	I0429 12:25:20.531277   17722 main.go:141] libmachine: Provisioning with buildroot...
	I0429 12:25:20.531282   17722 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0429 12:25:20.531410   17722 buildroot.go:166] provisioning hostname "ha-431000"
	I0429 12:25:20.531421   17722 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0429 12:25:20.531516   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:20.531611   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:20.531708   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.531810   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.531912   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:20.532028   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:20.532159   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:20.532169   17722 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000 && echo "ha-431000" | sudo tee /etc/hostname
	I0429 12:25:20.591606   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000
	
	I0429 12:25:20.591626   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:20.591764   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:20.591878   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.591967   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.592068   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:20.592207   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:20.592350   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:20.592361   17722 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 12:25:20.648101   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:25:20.648121   17722 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18774-15855/.minikube CaCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18774-15855/.minikube}
	I0429 12:25:20.648135   17722 buildroot.go:174] setting up certificates
	I0429 12:25:20.648144   17722 provision.go:84] configureAuth start
	I0429 12:25:20.648150   17722 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0429 12:25:20.648287   17722 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0429 12:25:20.648383   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:20.648467   17722 provision.go:143] copyHostCerts
	I0429 12:25:20.648496   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem
	I0429 12:25:20.648561   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem, removing ...
	I0429 12:25:20.648570   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem
	I0429 12:25:20.648715   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem (1082 bytes)
	I0429 12:25:20.648916   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem
	I0429 12:25:20.648956   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem, removing ...
	I0429 12:25:20.648961   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem
	I0429 12:25:20.649049   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem (1123 bytes)
	I0429 12:25:20.649197   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem
	I0429 12:25:20.649236   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem, removing ...
	I0429 12:25:20.649241   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem
	I0429 12:25:20.649330   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem (1675 bytes)
	I0429 12:25:20.649473   17722 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca-key.pem org=jenkins.ha-431000 san=[127.0.0.1 192.169.0.53 ha-431000 localhost minikube]
	I0429 12:25:20.983440   17722 provision.go:177] copyRemoteCerts
	I0429 12:25:20.983549   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 12:25:20.983566   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:20.983728   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:20.983833   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.983973   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:20.984120   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:21.014934   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 12:25:21.015004   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 12:25:21.034390   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 12:25:21.034453   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0429 12:25:21.055272   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 12:25:21.055342   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 12:25:21.074968   17722 provision.go:87] duration metric: took 426.81697ms to configureAuth
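	The configureAuth step above generates a server certificate whose SANs are taken from the log line "san=[127.0.0.1 192.169.0.53 ha-431000 localhost minikube]" and then copies ca.pem, server.pem and server-key.pem to /etc/docker over SSH. A rough sketch of building a certificate template with those SANs using only the Go standard library (self-signed here for brevity; the log shows the real certificate being signed by the minikube CA in ca.pem/ca-key.pem, and this is not minikube's actual implementation):

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"math/big"
		"net"
		"time"
	)

	func main() {
		key, _ := rsa.GenerateKey(rand.Reader, 2048)
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-431000"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			// SANs as reported in the provisioning log above.
			DNSNames:    []string{"ha-431000", "localhost", "minikube"},
			IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.53")},
		}
		// Self-signed for illustration only.
		_, _ = x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	}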
	I0429 12:25:21.074981   17722 buildroot.go:189] setting minikube options for container-runtime
	I0429 12:25:21.075129   17722 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:25:21.075141   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:21.075289   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:21.075381   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:21.075460   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.075546   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.075623   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:21.075741   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:21.075864   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:21.075872   17722 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 12:25:21.124122   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 12:25:21.124135   17722 buildroot.go:70] root file system type: tmpfs
	I0429 12:25:21.124211   17722 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 12:25:21.124224   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:21.124366   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:21.124464   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.124564   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.124654   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:21.124800   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:21.124944   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:21.124987   17722 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 12:25:21.192987   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 12:25:21.193010   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:21.193155   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:21.193259   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.193352   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.193439   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:21.193576   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:21.193725   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:21.193737   17722 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 12:25:22.738902   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 12:25:22.738918   17722 main.go:141] libmachine: Checking connection to Docker...
	I0429 12:25:22.738926   17722 main.go:141] libmachine: (ha-431000) Calling .GetURL
	I0429 12:25:22.739069   17722 main.go:141] libmachine: Docker is up and running!
	I0429 12:25:22.739077   17722 main.go:141] libmachine: Reticulating splines...
	I0429 12:25:22.739081   17722 client.go:171] duration metric: took 14.211068638s to LocalClient.Create
	I0429 12:25:22.739093   17722 start.go:167] duration metric: took 14.211111126s to libmachine.API.Create "ha-431000"
	I0429 12:25:22.739099   17722 start.go:293] postStartSetup for "ha-431000" (driver="hyperkit")
	I0429 12:25:22.739106   17722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 12:25:22.739115   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:22.739266   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 12:25:22.739278   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:22.739375   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:22.739470   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:22.739572   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:22.739667   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:22.770970   17722 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 12:25:22.774135   17722 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 12:25:22.774148   17722 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18774-15855/.minikube/addons for local assets ...
	I0429 12:25:22.774248   17722 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18774-15855/.minikube/files for local assets ...
	I0429 12:25:22.774464   17722 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem -> 163012.pem in /etc/ssl/certs
	I0429 12:25:22.774471   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem -> /etc/ssl/certs/163012.pem
	I0429 12:25:22.774683   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 12:25:22.781634   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem --> /etc/ssl/certs/163012.pem (1708 bytes)
	I0429 12:25:22.801443   17722 start.go:296] duration metric: took 62.337848ms for postStartSetup
	I0429 12:25:22.801469   17722 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0429 12:25:22.802046   17722 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0429 12:25:22.802212   17722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json ...
	I0429 12:25:22.802533   17722 start.go:128] duration metric: took 14.328253816s to createHost
	I0429 12:25:22.802553   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:22.802653   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:22.802747   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:22.802827   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:22.802911   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:22.803023   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:22.803149   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:22.803156   17722 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 12:25:22.853273   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714418723.058219257
	
	I0429 12:25:22.853285   17722 fix.go:216] guest clock: 1714418723.058219257
	I0429 12:25:22.853294   17722 fix.go:229] Guest: 2024-04-29 12:25:23.058219257 -0700 PDT Remote: 2024-04-29 12:25:22.80254 -0700 PDT m=+14.919884293 (delta=255.679257ms)
	I0429 12:25:22.853306   17722 fix.go:200] guest clock delta is within tolerance: 255.679257ms
	I0429 12:25:22.853309   17722 start.go:83] releasing machines lock for "ha-431000", held for 14.379182495s
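	The "guest clock" lines above come from running `date +%s.%N` on the guest and comparing the result with the host clock; the reported delta of 255.679257ms is under the tolerance, so no clock adjustment is made. An illustrative sketch of that comparison, using the exact values from the log (an assumption about the mechanism inferred from the log, not the fix.go source):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
		"time"
	)

	// guestClockDelta parses the guest's `date +%s.%N` output (e.g.
	// "1714418723.058219257" as captured above) and returns guest-minus-host skew.
	func guestClockDelta(guestDateOutput string, host time.Time) (time.Duration, error) {
		secs, err := strconv.ParseFloat(strings.TrimSpace(guestDateOutput), 64)
		if err != nil {
			return 0, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		return guest.Sub(host), nil
	}

	func main() {
		// Host timestamp taken from the log: 2024-04-29 12:25:22.80254 -0700 PDT.
		host := time.Date(2024, 4, 29, 12, 25, 22, 802540000, time.FixedZone("PDT", -7*3600))
		d, _ := guestClockDelta("1714418723.058219257", host)
		fmt.Println("guest clock delta:", d) // ≈ 255.68ms, within tolerance as noted in the log
	}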
	I0429 12:25:22.853329   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:22.853469   17722 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0429 12:25:22.853555   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:22.853914   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:22.854029   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:22.854103   17722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 12:25:22.854130   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:22.854195   17722 ssh_runner.go:195] Run: cat /version.json
	I0429 12:25:22.854205   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:22.854229   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:22.854325   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:22.854339   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:22.854430   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:22.854457   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:22.854550   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:22.854559   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:22.854669   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:22.880227   17722 ssh_runner.go:195] Run: systemctl --version
	I0429 12:25:22.884904   17722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 12:25:23.083357   17722 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 12:25:23.083479   17722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 12:25:23.099007   17722 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 12:25:23.099018   17722 start.go:494] detecting cgroup driver to use...
	I0429 12:25:23.099124   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:25:23.114251   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 12:25:23.124072   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 12:25:23.133303   17722 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 12:25:23.133353   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 12:25:23.142358   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:25:23.151274   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 12:25:23.160219   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:25:23.169150   17722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 12:25:23.178221   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 12:25:23.187855   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 12:25:23.200187   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
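The sed commands above force containerd onto the cgroupfs driver by rewriting SystemdCgroup in /etc/containerd/config.toml. Below is a minimal Go sketch of the same in-place edit done in-process; the function name and the sample TOML fragment are assumptions for illustration.

```go
package main

import (
	"fmt"
	"regexp"
)

// setCgroupfsDriver mirrors the sed edit logged above: it forces
// SystemdCgroup = false so containerd uses the cgroupfs driver. The config
// is handled as a plain string here for simplicity.
func setCgroupfsDriver(configTOML string) string {
	re := regexp.MustCompile(`(?m)^(\s*)SystemdCgroup = .*$`)
	return re.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
}

func main() {
	in := `[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true
`
	fmt.Print(setCgroupfsDriver(in))
}
```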
	I0429 12:25:23.215042   17722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 12:25:23.229984   17722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 12:25:23.239505   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:23.358187   17722 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 12:25:23.377452   17722 start.go:494] detecting cgroup driver to use...
	I0429 12:25:23.377536   17722 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 12:25:23.390115   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:25:23.402150   17722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 12:25:23.418123   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:25:23.430203   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:25:23.441283   17722 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 12:25:23.508378   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:25:23.519755   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:25:23.536893   17722 ssh_runner.go:195] Run: which cri-dockerd
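The crictl.yaml writes above point crictl at the cri-dockerd socket. A small Go sketch of an equivalent file write follows; the helper name writeCrictlConfig and the temp-dir destination are assumptions so the sketch can run without root.

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// writeCrictlConfig reproduces the effect of the `printf ... | sudo tee
// /etc/crictl.yaml` command logged above, but targets an arbitrary path.
func writeCrictlConfig(path, runtimeEndpoint string) error {
	content := fmt.Sprintf("runtime-endpoint: %s\n", runtimeEndpoint)
	if err := os.MkdirAll(filepath.Dir(path), 0o755); err != nil {
		return err
	}
	return os.WriteFile(path, []byte(content), 0o644)
}

func main() {
	path := filepath.Join(os.TempDir(), "crictl.yaml")
	if err := writeCrictlConfig(path, "unix:///var/run/cri-dockerd.sock"); err != nil {
		panic(err)
	}
	fmt.Println("wrote", path)
}
```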
	I0429 12:25:23.539939   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 12:25:23.548090   17722 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 12:25:23.561479   17722 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 12:25:23.659092   17722 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 12:25:23.770465   17722 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 12:25:23.770541   17722 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 12:25:23.785669   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:23.879272   17722 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 12:25:26.127907   17722 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.248642004s)
	I0429 12:25:26.127971   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 12:25:26.139541   17722 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0429 12:25:26.153363   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 12:25:26.163941   17722 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 12:25:26.260524   17722 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 12:25:26.359754   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:26.473092   17722 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 12:25:26.486634   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 12:25:26.497861   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:26.593975   17722 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 12:25:26.650473   17722 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 12:25:26.650551   17722 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 12:25:26.655442   17722 start.go:562] Will wait 60s for crictl version
	I0429 12:25:26.655497   17722 ssh_runner.go:195] Run: which crictl
	I0429 12:25:26.658642   17722 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 12:25:26.687124   17722 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 12:25:26.687194   17722 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 12:25:26.703960   17722 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 12:25:26.743708   17722 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 12:25:26.743769   17722 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0429 12:25:26.744198   17722 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0429 12:25:26.748889   17722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
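The bash one-liner above drops any stale host.minikube.internal entry from /etc/hosts and appends the current one. The Go sketch below performs the same filter-and-append on an in-memory string; the helper name ensureHostsEntry is an assumption.

```go
package main

import (
	"fmt"
	"strings"
)

// ensureHostsEntry mimics the bash one-liner logged above: drop any line
// ending in "\t<name>" and append a fresh "IP\t<name>" entry.
func ensureHostsEntry(hosts, ip, name string) string {
	var kept []string
	for _, line := range strings.Split(hosts, "\n") {
		if line == "" || strings.HasSuffix(line, "\t"+name) {
			continue
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+name)
	return strings.Join(kept, "\n") + "\n"
}

func main() {
	hosts := "127.0.0.1\tlocalhost\n10.0.2.2\thost.minikube.internal\n"
	fmt.Print(ensureHostsEntry(hosts, "192.169.0.1", "host.minikube.internal"))
}
```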
	I0429 12:25:26.759848   17722 kubeadm.go:877] updating cluster {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 12:25:26.759910   17722 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 12:25:26.759963   17722 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 12:25:26.778002   17722 docker.go:685] Got preloaded images: 
	I0429 12:25:26.778015   17722 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0429 12:25:26.778063   17722 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 12:25:26.792529   17722 ssh_runner.go:195] Run: which lz4
	I0429 12:25:26.795405   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0429 12:25:26.795520   17722 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0429 12:25:26.798617   17722 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 12:25:26.798635   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0429 12:25:27.765888   17722 docker.go:649] duration metric: took 970.420369ms to copy over tarball
	I0429 12:25:27.765960   17722 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 12:25:30.567839   17722 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.801895501s)
	I0429 12:25:30.567854   17722 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 12:25:30.595797   17722 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 12:25:30.604930   17722 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0429 12:25:30.618739   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:30.724313   17722 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 12:25:32.998664   17722 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.274358101s)
	I0429 12:25:32.998761   17722 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 12:25:33.010141   17722 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 12:25:33.010164   17722 cache_images.go:84] Images are preloaded, skipping loading
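The preload check above lists `docker images --format {{.Repository}}:{{.Tag}}` and decides whether the required Kubernetes images are already present. A minimal Go sketch of that comparison follows; the function name missingImages and the sample inputs are assumptions.

```go
package main

import (
	"fmt"
	"strings"
)

// missingImages mirrors the preload check logged above: given the output of
// `docker images --format {{.Repository}}:{{.Tag}}`, report which required
// images are not present yet.
func missingImages(dockerImagesOutput string, required []string) []string {
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(dockerImagesOutput), "\n") {
		have[strings.TrimSpace(line)] = true
	}
	var missing []string
	for _, img := range required {
		if !have[img] {
			missing = append(missing, img)
		}
	}
	return missing
}

func main() {
	out := "registry.k8s.io/kube-apiserver:v1.30.0\nregistry.k8s.io/pause:3.9\n"
	required := []string{"registry.k8s.io/kube-apiserver:v1.30.0", "registry.k8s.io/etcd:3.5.12-0"}
	fmt.Println(missingImages(out, required))
}
```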
	I0429 12:25:33.010175   17722 kubeadm.go:928] updating node { 192.169.0.53 8443 v1.30.0 docker true true} ...
	I0429 12:25:33.010258   17722 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 12:25:33.010327   17722 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 12:25:33.028263   17722 cni.go:84] Creating CNI manager for ""
	I0429 12:25:33.028276   17722 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 12:25:33.028288   17722 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 12:25:33.028307   17722 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.53 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-431000 NodeName:ha-431000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 12:25:33.028401   17722 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-431000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 12:25:33.028424   17722 kube-vip.go:115] generating kube-vip config ...
	I0429 12:25:33.028473   17722 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 12:25:33.041730   17722 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0429 12:25:33.041824   17722 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
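kube-vip.go renders the static-pod manifest above from the cluster's VIP settings. The Go sketch below shows the general template-fill approach with a heavily trimmed manifest; the template text and field names are assumptions for illustration, not minikube's actual template.

```go
package main

import (
	"os"
	"text/template"
)

// vipManifest is a heavily trimmed stand-in for the kube-vip static-pod
// manifest shown above; only the fields relevant to the VIP are kept.
const vipManifest = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{ .Image }}
    env:
    - name: address
      value: "{{ .VIP }}"
    - name: port
      value: "{{ .Port }}"
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(vipManifest))
	// Values taken from the config logged above.
	_ = t.Execute(os.Stdout, map[string]string{
		"Image": "ghcr.io/kube-vip/kube-vip:v0.8.0",
		"VIP":   "192.169.0.254",
		"Port":  "8443",
	})
}
```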
	I0429 12:25:33.041883   17722 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 12:25:33.049658   17722 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 12:25:33.049723   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0429 12:25:33.057238   17722 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0429 12:25:33.070533   17722 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 12:25:33.084130   17722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0429 12:25:33.098615   17722 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0429 12:25:33.112003   17722 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0429 12:25:33.115112   17722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 12:25:33.124648   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:33.220871   17722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:25:33.235322   17722 certs.go:68] Setting up /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000 for IP: 192.169.0.53
	I0429 12:25:33.235337   17722 certs.go:194] generating shared ca certs ...
	I0429 12:25:33.235350   17722 certs.go:226] acquiring lock for ca certs: {Name:mk18b421bd78dac9f18e716e2a7f75e3dd2eab02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.235540   17722 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.key
	I0429 12:25:33.235613   17722 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/proxy-client-ca.key
	I0429 12:25:33.235624   17722 certs.go:256] generating profile certs ...
	I0429 12:25:33.235671   17722 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.key
	I0429 12:25:33.235685   17722 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.crt with IP's: []
	I0429 12:25:33.339009   17722 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.crt ...
	I0429 12:25:33.339025   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.crt: {Name:mkb1110c9ecfb391fd480dbb43740124cada1f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.339372   17722 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.key ...
	I0429 12:25:33.339380   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.key: {Name:mkd77fb4d6c3d30f48c31a52650895ec059a5da2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.339601   17722 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key.da16b1a9
	I0429 12:25:33.339619   17722 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt.da16b1a9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.53 192.169.0.254]
	I0429 12:25:33.423656   17722 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt.da16b1a9 ...
	I0429 12:25:33.423669   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt.da16b1a9: {Name:mk814e943097b224c38c531359e60f0e011c83a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.454432   17722 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key.da16b1a9 ...
	I0429 12:25:33.454489   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key.da16b1a9: {Name:mkf400b8f0ffe83cfed5eaf2cbd6dc5c6a80d173 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.455578   17722 certs.go:381] copying /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt.da16b1a9 -> /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt
	I0429 12:25:33.455838   17722 certs.go:385] copying /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key.da16b1a9 -> /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key
	I0429 12:25:33.456077   17722 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.key
	I0429 12:25:33.456094   17722 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.crt with IP's: []
	I0429 12:25:33.747139   17722 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.crt ...
	I0429 12:25:33.747160   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.crt: {Name:mk16d91ff8fca22baa28a0d26524b0eab36376d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.747480   17722 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.key ...
	I0429 12:25:33.747491   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.key: {Name:mk54d72d806a4c855d8c1362244d32326e1573f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.747702   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 12:25:33.747734   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 12:25:33.747753   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 12:25:33.747772   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 12:25:33.747790   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 12:25:33.747808   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 12:25:33.747826   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 12:25:33.747844   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 12:25:33.747942   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/16301.pem (1338 bytes)
	W0429 12:25:33.747991   17722 certs.go:480] ignoring /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/16301_empty.pem, impossibly tiny 0 bytes
	I0429 12:25:33.747999   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 12:25:33.748029   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem (1082 bytes)
	I0429 12:25:33.748058   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem (1123 bytes)
	I0429 12:25:33.748086   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem (1675 bytes)
	I0429 12:25:33.748159   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem (1708 bytes)
	I0429 12:25:33.748192   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:25:33.748211   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/16301.pem -> /usr/share/ca-certificates/16301.pem
	I0429 12:25:33.748229   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem -> /usr/share/ca-certificates/163012.pem
	I0429 12:25:33.748622   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 12:25:33.769199   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 12:25:33.789288   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 12:25:33.809631   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 12:25:33.829028   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 12:25:33.848280   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 12:25:33.867787   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 12:25:33.886985   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 12:25:33.908803   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 12:25:33.936146   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/16301.pem --> /usr/share/ca-certificates/16301.pem (1338 bytes)
	I0429 12:25:33.955589   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem --> /usr/share/ca-certificates/163012.pem (1708 bytes)
	I0429 12:25:33.976759   17722 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 12:25:33.990981   17722 ssh_runner.go:195] Run: openssl version
	I0429 12:25:33.995277   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163012.pem && ln -fs /usr/share/ca-certificates/163012.pem /etc/ssl/certs/163012.pem"
	I0429 12:25:34.003661   17722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163012.pem
	I0429 12:25:34.007097   17722 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 19:19 /usr/share/ca-certificates/163012.pem
	I0429 12:25:34.007137   17722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163012.pem
	I0429 12:25:34.011417   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163012.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 12:25:34.019724   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 12:25:34.027921   17722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:25:34.031378   17722 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 19:11 /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:25:34.031414   17722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:25:34.035640   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 12:25:34.044197   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16301.pem && ln -fs /usr/share/ca-certificates/16301.pem /etc/ssl/certs/16301.pem"
	I0429 12:25:34.052743   17722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16301.pem
	I0429 12:25:34.056299   17722 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 19:19 /usr/share/ca-certificates/16301.pem
	I0429 12:25:34.056347   17722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16301.pem
	I0429 12:25:34.060683   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16301.pem /etc/ssl/certs/51391683.0"
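Each certificate above is hashed with `openssl x509 -hash -noout` and linked into /etc/ssl/certs under <hash>.0. A Go sketch of that hash-and-symlink step follows; the helper name linkCACert and the unprivileged destination directory are assumptions so the sketch can run outside the VM.

```go
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCACert mirrors the sequence logged above: compute the OpenSSL subject
// hash of a CA certificate and create the <hash>.0 symlink that the trust
// store lookup expects.
func linkCACert(certPath, certsDir string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	if err := os.Symlink(certPath, link); err != nil && !os.IsExist(err) {
		return "", err
	}
	return link, nil
}

func main() {
	link, err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem", os.TempDir())
	fmt.Println(link, err)
}
```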
	I0429 12:25:34.069054   17722 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 12:25:34.072258   17722 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 12:25:34.072301   17722 kubeadm.go:391] StartCluster: {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:25:34.072393   17722 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 12:25:34.083740   17722 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 12:25:34.091194   17722 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 12:25:34.098727   17722 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 12:25:34.106172   17722 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 12:25:34.106181   17722 kubeadm.go:156] found existing configuration files:
	
	I0429 12:25:34.106220   17722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 12:25:34.113138   17722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 12:25:34.113180   17722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 12:25:34.120473   17722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 12:25:34.127438   17722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 12:25:34.127477   17722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 12:25:34.134756   17722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 12:25:34.141743   17722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 12:25:34.141780   17722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 12:25:34.149046   17722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 12:25:34.156264   17722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 12:25:34.156318   17722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 12:25:34.163835   17722 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 12:25:34.237910   17722 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 12:25:34.237955   17722 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 12:25:34.323023   17722 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 12:25:34.323108   17722 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 12:25:34.323194   17722 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 12:25:34.504766   17722 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 12:25:34.536903   17722 out.go:204]   - Generating certificates and keys ...
	I0429 12:25:34.536959   17722 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 12:25:34.537016   17722 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 12:25:34.879309   17722 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 12:25:34.943072   17722 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 12:25:35.013417   17722 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 12:25:35.182918   17722 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 12:25:35.261483   17722 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 12:25:35.261590   17722 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.53 127.0.0.1 ::1]
	I0429 12:25:35.419895   17722 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 12:25:35.420010   17722 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.53 127.0.0.1 ::1]
	I0429 12:25:35.479554   17722 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 12:25:35.607018   17722 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 12:25:35.903671   17722 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 12:25:35.904031   17722 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 12:25:36.142024   17722 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 12:25:36.288455   17722 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 12:25:36.556621   17722 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 12:25:36.634522   17722 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 12:25:36.899304   17722 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 12:25:36.899717   17722 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 12:25:36.902090   17722 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 12:25:36.923478   17722 out.go:204]   - Booting up control plane ...
	I0429 12:25:36.923546   17722 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 12:25:36.923600   17722 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 12:25:36.923664   17722 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 12:25:36.923740   17722 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 12:25:36.923808   17722 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 12:25:36.923851   17722 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 12:25:37.025954   17722 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 12:25:37.026032   17722 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 12:25:38.035770   17722 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.010128476s
	I0429 12:25:38.035905   17722 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 12:25:43.764081   17722 kubeadm.go:309] [api-check] The API server is healthy after 5.731795501s
	I0429 12:25:43.772560   17722 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 12:25:43.789869   17722 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 12:25:43.805688   17722 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 12:25:43.805836   17722 kubeadm.go:309] [mark-control-plane] Marking the node ha-431000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 12:25:43.811946   17722 kubeadm.go:309] [bootstrap-token] Using token: m6uvir.d4h53m26b1o2593q
	I0429 12:25:43.849413   17722 out.go:204]   - Configuring RBAC rules ...
	I0429 12:25:43.849574   17722 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 12:25:43.852729   17722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 12:25:43.892587   17722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 12:25:43.894879   17722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 12:25:43.896753   17722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 12:25:43.898705   17722 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 12:25:44.168425   17722 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 12:25:44.581231   17722 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 12:25:45.167548   17722 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 12:25:45.168144   17722 kubeadm.go:309] 
	I0429 12:25:45.168196   17722 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 12:25:45.168206   17722 kubeadm.go:309] 
	I0429 12:25:45.168263   17722 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 12:25:45.168268   17722 kubeadm.go:309] 
	I0429 12:25:45.168291   17722 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 12:25:45.168346   17722 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 12:25:45.168395   17722 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 12:25:45.168404   17722 kubeadm.go:309] 
	I0429 12:25:45.168448   17722 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 12:25:45.168454   17722 kubeadm.go:309] 
	I0429 12:25:45.168486   17722 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 12:25:45.168492   17722 kubeadm.go:309] 
	I0429 12:25:45.168531   17722 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 12:25:45.168592   17722 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 12:25:45.168639   17722 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 12:25:45.168643   17722 kubeadm.go:309] 
	I0429 12:25:45.168723   17722 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 12:25:45.168792   17722 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 12:25:45.168798   17722 kubeadm.go:309] 
	I0429 12:25:45.168870   17722 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token m6uvir.d4h53m26b1o2593q \
	I0429 12:25:45.168957   17722 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:536cc8a1d30a1fe87d4b4651015a78bced2496ff2cc6e72fc88d924b8d819ff9 \
	I0429 12:25:45.168980   17722 kubeadm.go:309] 	--control-plane 
	I0429 12:25:45.168984   17722 kubeadm.go:309] 
	I0429 12:25:45.169055   17722 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 12:25:45.169064   17722 kubeadm.go:309] 
	I0429 12:25:45.169140   17722 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token m6uvir.d4h53m26b1o2593q \
	I0429 12:25:45.169225   17722 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:536cc8a1d30a1fe87d4b4651015a78bced2496ff2cc6e72fc88d924b8d819ff9 
	I0429 12:25:45.169327   17722 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 12:25:45.169339   17722 cni.go:84] Creating CNI manager for ""
	I0429 12:25:45.169343   17722 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 12:25:45.192418   17722 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 12:25:45.250402   17722 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 12:25:45.254278   17722 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 12:25:45.254288   17722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 12:25:45.273385   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 12:25:45.493620   17722 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 12:25:45.493687   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:45.493694   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-431000 minikube.k8s.io/updated_at=2024_04_29T12_25_45_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=ha-431000 minikube.k8s.io/primary=true
	I0429 12:25:45.612967   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:45.644835   17722 ops.go:34] apiserver oom_adj: -16
	I0429 12:25:46.113178   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:46.613113   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:47.114140   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:47.613848   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:48.113291   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:48.613593   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:49.113154   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:49.613813   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:50.113663   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:50.613520   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:51.113510   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:51.613048   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:52.113848   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:52.614240   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:53.113945   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:53.615040   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:54.113136   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:54.614312   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:55.113432   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:55.614732   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:56.113281   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:56.613014   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:57.114177   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:57.613453   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:57.726340   17722 kubeadm.go:1107] duration metric: took 12.232843816s to wait for elevateKubeSystemPrivileges
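The repeated `kubectl get sa default` calls above are a poll loop that waits for the default service account before granting kube-system privileges. Below is a generic Go retry-with-deadline sketch of that pattern; the interval, timeout, and helper name pollUntil are assumptions, not minikube's actual values.

```go
package main

import (
	"errors"
	"fmt"
	"time"
)

// pollUntil retries fn every interval until it succeeds or the deadline
// passes, roughly the shape of the repeated service-account checks logged above.
func pollUntil(interval, timeout time.Duration, fn func() error) error {
	deadline := time.Now().Add(timeout)
	for {
		err := fn()
		if err == nil {
			return nil
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("timed out after %v: %w", timeout, err)
		}
		time.Sleep(interval)
	}
}

func main() {
	attempts := 0
	err := pollUntil(10*time.Millisecond, time.Second, func() error {
		attempts++
		if attempts < 5 {
			return errors.New("default service account not ready")
		}
		return nil
	})
	fmt.Println("attempts:", attempts, "err:", err)
}
```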
	W0429 12:25:57.726368   17722 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 12:25:57.726373   17722 kubeadm.go:393] duration metric: took 23.654349307s to StartCluster
	I0429 12:25:57.726388   17722 settings.go:142] acquiring lock: {Name:mk4619a2e41059be70c5979af83844e654a88068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:57.726480   17722 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18774-15855/kubeconfig
	I0429 12:25:57.726968   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/kubeconfig: {Name:mkea524d41d79b030c40cdcb50fe58e264b35b6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:57.727247   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 12:25:57.727263   17722 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 12:25:57.727278   17722 start.go:240] waiting for startup goroutines ...
	I0429 12:25:57.727289   17722 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 12:25:57.727325   17722 addons.go:69] Setting storage-provisioner=true in profile "ha-431000"
	I0429 12:25:57.727331   17722 addons.go:69] Setting default-storageclass=true in profile "ha-431000"
	I0429 12:25:57.727351   17722 addons.go:234] Setting addon storage-provisioner=true in "ha-431000"
	I0429 12:25:57.727356   17722 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-431000"
	I0429 12:25:57.727369   17722 host.go:66] Checking if "ha-431000" exists ...
	I0429 12:25:57.727424   17722 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:25:57.727601   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:57.727602   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:57.727616   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:57.727617   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:57.736886   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61984
	I0429 12:25:57.737295   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:57.737678   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:57.737690   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:57.737981   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61986
	I0429 12:25:57.738074   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:57.738331   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:57.738533   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:57.738579   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:57.738658   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:57.738671   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:57.739264   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:57.739478   17722 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0429 12:25:57.739646   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:57.739702   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:57.741554   17722 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/18774-15855/kubeconfig
	I0429 12:25:57.741775   17722 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/18774-15855/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xdd3cee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 12:25:57.742264   17722 cert_rotation.go:137] Starting client certificate rotation controller
	I0429 12:25:57.742421   17722 addons.go:234] Setting addon default-storageclass=true in "ha-431000"
	I0429 12:25:57.742443   17722 host.go:66] Checking if "ha-431000" exists ...
	I0429 12:25:57.742660   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:57.742677   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:57.747696   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61988
	I0429 12:25:57.748147   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:57.748482   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:57.748498   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:57.748745   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:57.748874   17722 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0429 12:25:57.748975   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:57.749041   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:57.750035   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:57.788044   17722 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 12:25:57.751748   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61990
	I0429 12:25:57.825563   17722 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 12:25:57.825579   17722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 12:25:57.825595   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:57.825776   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:57.825980   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:57.826026   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:57.826183   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:57.826334   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:57.826500   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:57.826516   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:57.826839   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:57.827237   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:57.827262   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:57.835538   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 12:25:57.836072   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61993
	I0429 12:25:57.836372   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:57.836712   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:57.836731   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:57.836941   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:57.837051   17722 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0429 12:25:57.837149   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:57.837235   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:57.838190   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:57.838373   17722 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 12:25:57.838382   17722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 12:25:57.838392   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:57.838527   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:57.838615   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:57.838701   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:57.838792   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:57.921992   17722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 12:25:57.963950   17722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 12:25:58.239892   17722 start.go:946] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0429 12:25:58.350022   17722 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:58.350036   17722 main.go:141] libmachine: (ha-431000) Calling .Close
	I0429 12:25:58.350091   17722 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:58.350105   17722 main.go:141] libmachine: (ha-431000) Calling .Close
	I0429 12:25:58.350223   17722 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0429 12:25:58.350224   17722 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:58.350239   17722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:58.350250   17722 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:58.350258   17722 main.go:141] libmachine: (ha-431000) Calling .Close
	I0429 12:25:58.350278   17722 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:58.350288   17722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:58.350302   17722 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:58.350302   17722 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0429 12:25:58.350309   17722 main.go:141] libmachine: (ha-431000) Calling .Close
	I0429 12:25:58.350428   17722 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:58.350439   17722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:58.350445   17722 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0429 12:25:58.350487   17722 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:58.350497   17722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:58.350532   17722 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0429 12:25:58.350553   17722 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0429 12:25:58.350558   17722 round_trippers.go:469] Request Headers:
	I0429 12:25:58.350566   17722 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:25:58.350571   17722 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0429 12:25:58.356148   17722 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:25:58.356671   17722 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0429 12:25:58.356684   17722 round_trippers.go:469] Request Headers:
	I0429 12:25:58.356690   17722 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:25:58.356695   17722 round_trippers.go:473]     Content-Type: application/json
	I0429 12:25:58.356699   17722 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0429 12:25:58.359632   17722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:25:58.359767   17722 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:58.359777   17722 main.go:141] libmachine: (ha-431000) Calling .Close
	I0429 12:25:58.359915   17722 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0429 12:25:58.359935   17722 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:58.359942   17722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:58.383304   17722 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0429 12:25:58.425404   17722 addons.go:505] duration metric: took 698.121896ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0429 12:25:58.425460   17722 start.go:245] waiting for cluster config update ...
	I0429 12:25:58.425487   17722 start.go:254] writing updated cluster config ...
	I0429 12:25:58.541992   17722 out.go:177] 
	I0429 12:25:58.562923   17722 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:25:58.563008   17722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json ...
	I0429 12:25:58.583885   17722 out.go:177] * Starting "ha-431000-m02" control-plane node in "ha-431000" cluster
	I0429 12:25:58.624732   17722 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 12:25:58.624767   17722 cache.go:56] Caching tarball of preloaded images
	I0429 12:25:58.624960   17722 preload.go:173] Found /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 12:25:58.624976   17722 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 12:25:58.625077   17722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json ...
	I0429 12:25:58.625802   17722 start.go:360] acquireMachinesLock for ha-431000-m02: {Name:mkd2435de3b02389c894a9c5afc7bb14c11265dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 12:25:58.625882   17722 start.go:364] duration metric: took 63.962µs to acquireMachinesLock for "ha-431000-m02"
	I0429 12:25:58.625908   17722 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 12:25:58.625983   17722 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0429 12:25:58.647279   17722 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 12:25:58.647410   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:58.647441   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:58.657607   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61997
	I0429 12:25:58.657937   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:58.658289   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:58.658303   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:58.658522   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:58.658625   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0429 12:25:58.658714   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:25:58.658812   17722 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0429 12:25:58.658829   17722 client.go:168] LocalClient.Create starting
	I0429 12:25:58.658858   17722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem
	I0429 12:25:58.658895   17722 main.go:141] libmachine: Decoding PEM data...
	I0429 12:25:58.658904   17722 main.go:141] libmachine: Parsing certificate...
	I0429 12:25:58.658941   17722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem
	I0429 12:25:58.658966   17722 main.go:141] libmachine: Decoding PEM data...
	I0429 12:25:58.658976   17722 main.go:141] libmachine: Parsing certificate...
	I0429 12:25:58.658987   17722 main.go:141] libmachine: Running pre-create checks...
	I0429 12:25:58.658993   17722 main.go:141] libmachine: (ha-431000-m02) Calling .PreCreateCheck
	I0429 12:25:58.659060   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:58.659082   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0429 12:25:58.684517   17722 main.go:141] libmachine: Creating machine...
	I0429 12:25:58.684548   17722 main.go:141] libmachine: (ha-431000-m02) Calling .Create
	I0429 12:25:58.684719   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:58.685030   17722 main.go:141] libmachine: (ha-431000-m02) DBG | I0429 12:25:58.684708   17751 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/18774-15855/.minikube
	I0429 12:25:58.685090   17722 main.go:141] libmachine: (ha-431000-m02) Downloading /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18774-15855/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 12:25:58.880461   17722 main.go:141] libmachine: (ha-431000-m02) DBG | I0429 12:25:58.880334   17751 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa...
	I0429 12:25:58.972148   17722 main.go:141] libmachine: (ha-431000-m02) DBG | I0429 12:25:58.972072   17751 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk...
	I0429 12:25:58.972171   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Writing magic tar header
	I0429 12:25:58.972194   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Writing SSH key tar header
	I0429 12:25:58.972426   17722 main.go:141] libmachine: (ha-431000-m02) DBG | I0429 12:25:58.972393   17751 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02 ...
	I0429 12:25:59.330149   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:59.330167   17722 main.go:141] libmachine: (ha-431000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/hyperkit.pid
	I0429 12:25:59.330184   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Using UUID 9365fc26-c2e1-4d51-b9c4-d17102d62f4c
	I0429 12:25:59.355313   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Generated MAC ca:19:33:23:af:66
	I0429 12:25:59.355328   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0429 12:25:59.355356   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9365fc26-c2e1-4d51-b9c4-d17102d62f4c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b21b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0429 12:25:59.355382   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9365fc26-c2e1-4d51-b9c4-d17102d62f4c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b21b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0429 12:25:59.355419   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9365fc26-c2e1-4d51-b9c4-d17102d62f4c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0429 12:25:59.355452   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9365fc26-c2e1-4d51-b9c4-d17102d62f4c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0429 12:25:59.355465   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0429 12:25:59.358434   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: Pid is 17752
	I0429 12:25:59.359551   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 0
	I0429 12:25:59.359562   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:59.359638   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:25:59.360574   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:25:59.360681   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:25:59.360697   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:25:59.360720   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:59.360739   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:59.360752   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:59.360762   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:59.360776   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:59.360790   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:59.360800   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:59.360809   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:59.360829   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:59.360841   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:59.360856   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:59.360868   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:59.360879   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:59.360894   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:59.360903   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:59.360932   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:59.360940   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:59.360949   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:59.360958   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:59.360966   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:59.360973   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:59.360981   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:59.360987   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:59.360995   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:59.361001   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:59.361007   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:59.361029   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:59.361043   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:59.361055   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:59.361062   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:59.361069   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:59.361077   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:59.361085   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:59.361094   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:59.361102   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:59.361111   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:59.361122   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:59.361132   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:59.361139   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:59.361146   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:59.361155   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:59.361168   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:59.361180   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:59.361200   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:59.361215   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:59.361230   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:59.361243   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:59.361254   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:59.361267   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:59.361281   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:59.361294   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:59.365988   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0429 12:25:59.374154   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0429 12:25:59.374983   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0429 12:25:59.375007   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0429 12:25:59.375049   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0429 12:25:59.375071   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0429 12:25:59.757805   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0429 12:25:59.757822   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0429 12:25:59.872427   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0429 12:25:59.872442   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0429 12:25:59.872451   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0429 12:25:59.872459   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0429 12:25:59.873348   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0429 12:25:59.873361   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0429 12:26:01.362461   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 1
	I0429 12:26:01.362478   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:01.362589   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:01.363375   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:26:01.363463   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:26:01.363472   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:26:01.363482   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:26:01.363495   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:26:01.363505   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:26:01.363523   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:26:01.363540   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:26:01.363555   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:26:01.363564   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:26:01.363570   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:26:01.363580   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:26:01.363585   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:26:01.363594   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:26:01.363601   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:26:01.363606   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:26:01.363615   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:26:01.363620   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:26:01.363627   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:26:01.363641   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:26:01.363649   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:26:01.363656   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:26:01.363664   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:26:01.363671   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:26:01.363678   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:26:01.363685   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:26:01.363691   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:26:01.363698   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:26:01.363706   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:26:01.363715   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:26:01.363729   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:26:01.363745   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:26:01.363752   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:26:01.363758   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:26:01.363763   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:26:01.363770   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:26:01.363779   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:26:01.363786   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:26:01.363791   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:26:01.363806   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:26:01.363819   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:26:01.363827   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:26:01.363833   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:26:01.363839   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:26:01.363847   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:26:01.363854   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:26:01.363862   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:26:01.363876   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:26:01.363887   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:26:01.363897   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:26:01.363909   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:26:01.363929   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:26:01.363942   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:26:01.363951   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:26:03.364674   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 2
	I0429 12:26:03.364689   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:03.364761   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:03.365538   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:26:03.365618   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:26:03.365628   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:26:03.365649   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:26:03.365656   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:26:03.365663   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:26:03.365668   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:26:03.365681   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:26:03.365689   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:26:03.365701   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:26:03.365714   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:26:03.365721   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:26:03.365734   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:26:03.365745   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:26:03.365754   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:26:03.365760   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:26:03.365766   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:26:03.365776   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:26:03.365783   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:26:03.365789   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:26:03.365795   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:26:03.365802   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:26:03.365809   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:26:03.365820   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:26:03.365828   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:26:03.365835   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:26:03.365840   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:26:03.365850   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:26:03.365857   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:26:03.365871   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:26:03.365880   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:26:03.365887   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:26:03.365894   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:26:03.365906   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:26:03.365922   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:26:03.365930   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:26:03.365939   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:26:03.365955   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:26:03.365964   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:26:03.365971   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:26:03.365979   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:26:03.365986   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:26:03.365993   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:26:03.366000   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:26:03.366008   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:26:03.366015   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:26:03.366022   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:26:03.366028   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:26:03.366035   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:26:03.366043   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:26:03.366050   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:26:03.366067   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:26:03.366079   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:26:03.366088   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:26:05.177799   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:26:05 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0429 12:26:05.177845   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:26:05 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0429 12:26:05.177853   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:26:05 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0429 12:26:05.201640   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:26:05 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0429 12:26:05.367080   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 3
	I0429 12:26:05.367108   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:05.367266   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:05.368694   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:26:05.368836   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:26:05.368855   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:26:05.368872   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:26:05.368902   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:26:05.368917   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:26:05.368932   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:26:05.368944   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:26:05.368959   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:26:05.368998   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:26:05.369028   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:26:05.369054   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:26:05.369067   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:26:05.369081   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:26:05.369089   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:26:05.369107   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:26:05.369129   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:26:05.369139   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:26:05.369150   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:26:05.369170   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:26:05.369187   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:26:05.369198   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:26:05.369209   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:26:05.369224   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:26:05.369237   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:26:05.369248   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:26:05.369260   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:26:05.369269   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:26:05.369280   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:26:05.369290   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:26:05.369300   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:26:05.369309   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:26:05.369317   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:26:05.369342   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:26:05.369360   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:26:05.369371   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:26:05.369383   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:26:05.369391   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:26:05.369402   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:26:05.369412   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:26:05.369420   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:26:05.369429   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:26:05.369437   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:26:05.369446   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:26:05.369456   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:26:05.369467   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:26:05.369476   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:26:05.369491   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:26:05.369505   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:26:05.369516   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:26:05.369528   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:26:05.369539   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:26:05.369549   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:26:05.369556   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:26:07.369177   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 4
	I0429 12:26:07.369192   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:07.369292   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:07.370069   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:26:07.370166   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:26:07.370180   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:26:07.370193   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:26:07.370204   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:26:07.370215   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:26:07.370228   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:26:07.370247   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:26:07.370259   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:26:07.370274   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:26:07.370287   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:26:07.370296   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:26:07.370304   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:26:07.370309   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:26:07.370315   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:26:07.370344   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:26:07.370354   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:26:07.370363   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:26:07.370371   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:26:07.370384   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:26:07.370397   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:26:07.370403   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:26:07.370411   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:26:07.370418   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:26:07.370423   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:26:07.370438   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:26:07.370446   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:26:07.370452   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:26:07.370467   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:26:07.370478   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:26:07.370508   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:26:07.370534   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:26:07.370550   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:26:07.370567   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:26:07.370582   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:26:07.370595   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:26:07.370607   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:26:07.370629   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:26:07.370639   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:26:07.370646   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:26:07.370659   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:26:07.370669   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:26:07.370678   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:26:07.370685   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:26:07.370695   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:26:07.370729   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:26:07.370740   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:26:07.370745   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:26:07.370768   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:26:07.370775   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:26:07.370785   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:26:07.370791   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:26:07.370807   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:26:07.370822   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:26:09.370416   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 5
	I0429 12:26:09.370434   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:09.370529   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:09.371299   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:26:09.371370   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 53 entries in /var/db/dhcpd_leases!
	I0429 12:26:09.371380   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:ca:19:33:23:af:66 ID:1,ca:19:33:23:af:66 Lease:0x663145d0}
	I0429 12:26:09.371387   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found match: ca:19:33:23:af:66
	I0429 12:26:09.371394   17722 main.go:141] libmachine: (ha-431000-m02) DBG | IP: 192.169.0.54
	I0429 12:26:09.371443   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0429 12:26:09.371998   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:09.372094   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:09.372184   17722 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 12:26:09.372192   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0429 12:26:09.372282   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:09.372344   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:09.373081   17722 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 12:26:09.373088   17722 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 12:26:09.373091   17722 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 12:26:09.373096   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:09.373185   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:09.373265   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:09.373344   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:09.373427   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:09.373540   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:09.373715   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:09.373722   17722 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 12:26:09.398851   17722 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0429 12:26:12.461040   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:26:12.461055   17722 main.go:141] libmachine: Detecting the provisioner...
	I0429 12:26:12.461061   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.461182   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.461292   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.461377   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.461470   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.461607   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:12.461746   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:12.461754   17722 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 12:26:12.523397   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 12:26:12.523438   17722 main.go:141] libmachine: found compatible host: buildroot
	I0429 12:26:12.523445   17722 main.go:141] libmachine: Provisioning with buildroot...
	I0429 12:26:12.523450   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0429 12:26:12.523578   17722 buildroot.go:166] provisioning hostname "ha-431000-m02"
	I0429 12:26:12.523587   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0429 12:26:12.523662   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.523743   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.523829   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.523933   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.524027   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.524160   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:12.524319   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:12.524328   17722 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m02 && echo "ha-431000-m02" | sudo tee /etc/hostname
	I0429 12:26:12.595067   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m02
	
	I0429 12:26:12.595081   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.595219   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.595311   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.595396   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.595487   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.595609   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:12.595751   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:12.595763   17722 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 12:26:12.665199   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:26:12.665226   17722 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18774-15855/.minikube CaCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18774-15855/.minikube}
	I0429 12:26:12.665245   17722 buildroot.go:174] setting up certificates
	I0429 12:26:12.665255   17722 provision.go:84] configureAuth start
	I0429 12:26:12.665262   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0429 12:26:12.665398   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0429 12:26:12.665483   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.665568   17722 provision.go:143] copyHostCerts
	I0429 12:26:12.665595   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem
	I0429 12:26:12.665640   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem, removing ...
	I0429 12:26:12.665646   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem
	I0429 12:26:12.665779   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem (1082 bytes)
	I0429 12:26:12.665976   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem
	I0429 12:26:12.666005   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem, removing ...
	I0429 12:26:12.666009   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem
	I0429 12:26:12.666094   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem (1123 bytes)
	I0429 12:26:12.666237   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem
	I0429 12:26:12.666265   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem, removing ...
	I0429 12:26:12.666269   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem
	I0429 12:26:12.666334   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem (1675 bytes)
	I0429 12:26:12.666486   17722 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m02 san=[127.0.0.1 192.169.0.54 ha-431000-m02 localhost minikube]
	I0429 12:26:12.788915   17722 provision.go:177] copyRemoteCerts
	I0429 12:26:12.788965   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 12:26:12.788978   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.789131   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.789238   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.789318   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.789407   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0429 12:26:12.827773   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 12:26:12.827848   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 12:26:12.847569   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 12:26:12.847650   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 12:26:12.867106   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 12:26:12.867164   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 12:26:12.886794   17722 provision.go:87] duration metric: took 221.533222ms to configureAuth
	I0429 12:26:12.886806   17722 buildroot.go:189] setting minikube options for container-runtime
	I0429 12:26:12.886945   17722 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:26:12.886958   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:12.887088   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.887189   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.887288   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.887371   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.887445   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.887559   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:12.887688   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:12.887695   17722 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 12:26:12.949449   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 12:26:12.960507   17722 buildroot.go:70] root file system type: tmpfs
	I0429 12:26:12.960630   17722 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 12:26:12.960650   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.960811   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.960918   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.961031   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.961144   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.961314   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:12.961496   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:12.961559   17722 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.53"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 12:26:13.034441   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.53
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 12:26:13.034459   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:13.034584   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:13.034670   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:13.034771   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:13.034868   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:13.034990   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:13.035131   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:13.035143   17722 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 12:26:14.527695   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 12:26:14.527713   17722 main.go:141] libmachine: Checking connection to Docker...
	I0429 12:26:14.527719   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetURL
	I0429 12:26:14.527859   17722 main.go:141] libmachine: Docker is up and running!
	I0429 12:26:14.527867   17722 main.go:141] libmachine: Reticulating splines...
	I0429 12:26:14.527872   17722 client.go:171] duration metric: took 15.869221462s to LocalClient.Create
	I0429 12:26:14.527883   17722 start.go:167] duration metric: took 15.869255705s to libmachine.API.Create "ha-431000"
	I0429 12:26:14.527888   17722 start.go:293] postStartSetup for "ha-431000-m02" (driver="hyperkit")
	I0429 12:26:14.527895   17722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 12:26:14.527905   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:14.528059   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 12:26:14.528083   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:14.528190   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:14.528277   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:14.528371   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:14.528462   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0429 12:26:14.571874   17722 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 12:26:14.575715   17722 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 12:26:14.575725   17722 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18774-15855/.minikube/addons for local assets ...
	I0429 12:26:14.575820   17722 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18774-15855/.minikube/files for local assets ...
	I0429 12:26:14.575979   17722 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem -> 163012.pem in /etc/ssl/certs
	I0429 12:26:14.575985   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem -> /etc/ssl/certs/163012.pem
	I0429 12:26:14.576144   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 12:26:14.585193   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem --> /etc/ssl/certs/163012.pem (1708 bytes)
	I0429 12:26:14.617800   17722 start.go:296] duration metric: took 89.904272ms for postStartSetup
	I0429 12:26:14.617830   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0429 12:26:14.618443   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0429 12:26:14.618595   17722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json ...
	I0429 12:26:14.619362   17722 start.go:128] duration metric: took 15.993555623s to createHost
	I0429 12:26:14.619378   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:14.619471   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:14.619554   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:14.619637   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:14.619749   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:14.619852   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:14.619986   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:14.619993   17722 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 12:26:14.684608   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714418773.901729701
	
	I0429 12:26:14.684618   17722 fix.go:216] guest clock: 1714418773.901729701
	I0429 12:26:14.684638   17722 fix.go:229] Guest: 2024-04-29 12:26:13.901729701 -0700 PDT Remote: 2024-04-29 12:26:14.619372 -0700 PDT m=+66.737316646 (delta=-717.642299ms)
	I0429 12:26:14.684650   17722 fix.go:200] guest clock delta is within tolerance: -717.642299ms
	I0429 12:26:14.684654   17722 start.go:83] releasing machines lock for "ha-431000-m02", held for 16.058950363s
	I0429 12:26:14.684671   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:14.684806   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0429 12:26:14.706148   17722 out.go:177] * Found network options:
	I0429 12:26:14.727219   17722 out.go:177]   - NO_PROXY=192.169.0.53
	W0429 12:26:14.749052   17722 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 12:26:14.749119   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:14.749950   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:14.750211   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:14.750321   17722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 12:26:14.750357   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	W0429 12:26:14.750432   17722 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 12:26:14.750551   17722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 12:26:14.750572   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:14.750573   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:14.750783   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:14.750814   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:14.751054   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:14.751092   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:14.751333   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0429 12:26:14.751365   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:14.751506   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	W0429 12:26:14.787217   17722 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 12:26:14.787277   17722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 12:26:14.848201   17722 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 12:26:14.848227   17722 start.go:494] detecting cgroup driver to use...
	I0429 12:26:14.848358   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:26:14.865009   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 12:26:14.874483   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 12:26:14.883536   17722 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 12:26:14.883585   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 12:26:14.892833   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:26:14.901853   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 12:26:14.910642   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:26:14.919451   17722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 12:26:14.928628   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 12:26:14.937504   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 12:26:14.946508   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 12:26:14.955539   17722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 12:26:14.963731   17722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 12:26:14.971817   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:26:15.075369   17722 ssh_runner.go:195] Run: sudo systemctl restart containerd
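	After the sed edits above run, the affected keys in /etc/containerd/config.toml look roughly like this (a minimal sketch reconstructed from the commands above; surrounding sections and remaining defaults vary by containerd version):
	
	  [plugins."io.containerd.grpc.v1.cri"]
	    enable_unprivileged_ports = true
	    restrict_oom_score_adj = false
	    sandbox_image = "registry.k8s.io/pause:3.9"
	    [plugins."io.containerd.grpc.v1.cri".cni]
	      conf_dir = "/etc/cni/net.d"
	    [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc]
	      runtime_type = "io.containerd.runc.v2"
	      [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	        SystemdCgroup = false
	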
	I0429 12:26:15.095140   17722 start.go:494] detecting cgroup driver to use...
	I0429 12:26:15.095227   17722 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 12:26:15.114227   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:26:15.131639   17722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 12:26:15.152206   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:26:15.163545   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:26:15.173829   17722 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 12:26:15.196676   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:26:15.207120   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:26:15.222241   17722 ssh_runner.go:195] Run: which cri-dockerd
	I0429 12:26:15.225235   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 12:26:15.232398   17722 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 12:26:15.245911   17722 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 12:26:15.342741   17722 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 12:26:15.443507   17722 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 12:26:15.443528   17722 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
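	The 130-byte daemon.json written above is what enforces "cgroupfs" as the cgroup driver for dockerd. A plausible sketch of its contents (the exec-opts key is standard dockerd configuration; the log and storage options shown here are assumptions and may differ between minikube versions):
	
	  {
	    "exec-opts": ["native.cgroupdriver=cgroupfs"],
	    "log-driver": "json-file",
	    "log-opts": {"max-size": "100m"},
	    "storage-driver": "overlay2"
	  }
	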
	I0429 12:26:15.456917   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:26:15.554009   17722 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 12:27:16.592140   17722 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.038821319s)
	I0429 12:27:16.592205   17722 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0429 12:27:16.627966   17722 out.go:177] 
	W0429 12:27:16.650769   17722 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 19:26:12 ha-431000-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 19:26:12 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:12.562442558Z" level=info msg="Starting up"
	Apr 29 19:26:12 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:12.563076350Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 19:26:12 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:12.567928747Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=527
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.583689216Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597320910Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597367066Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597410988Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597446593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597506031Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597540413Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597805772Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597892122Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597933427Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597944227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.598009110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.598199858Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.599844343Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.599884447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.599970866Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.600003444Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.600098323Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.600162313Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.600172836Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.602841911Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.602899413Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.602913390Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.602923442Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.602932201Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603001707Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603176643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603272136Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603306202Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603317268Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603331817Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603343600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603351894Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603363377Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603373358Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603384153Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603392284Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603399425Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603411790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603424697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603433133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603441352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603448955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603456927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603465082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603473249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603488023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603499629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603507180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603514591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603523215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603533846Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603547413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603558028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603566468Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603593185Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603603060Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603610561Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603617284Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603705135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603740552Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603750151Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603879153Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603936791Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603967392Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603999139Z" level=info msg="containerd successfully booted in 0.021874s"
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.594997677Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.598794560Z" level=info msg="Loading containers: start."
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.706853403Z" level=info msg="Loading containers: done."
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.713747539Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.713886174Z" level=info msg="Daemon has completed initialization"
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.742027280Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 19:26:13 ha-431000-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.744424276Z" level=info msg="API listen on [::]:2376"
	Apr 29 19:26:14 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:14.784502897Z" level=info msg="Processing signal 'terminated'"
	Apr 29 19:26:14 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:14.785439200Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 19:26:14 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:14.785771501Z" level=info msg="Daemon shutdown complete"
	Apr 29 19:26:14 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:14.785832350Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 19:26:14 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:14.785935215Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 19:26:14 ha-431000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 19:26:15 ha-431000-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 19:26:15 ha-431000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 19:26:15 ha-431000-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 19:26:15 ha-431000-m02 dockerd[864]: time="2024-04-29T19:26:15.839252866Z" level=info msg="Starting up"
	Apr 29 19:27:16 ha-431000-m02 dockerd[864]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 19:27:16 ha-431000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 19:27:16 ha-431000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 19:27:16 ha-431000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0429 12:27:16.650883   17722 out.go:239] * 
	W0429 12:27:16.652130   17722 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 12:27:16.735182   17722 out.go:177] 
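	The journalctl excerpt above shows the failure mode: the first dockerd instance (pid 521) starts and shuts down cleanly, but the second instance (pid 864) launched by the restart blocks while dialing /run/containerd/containerd.sock and gives up when its 60-second deadline expires, which matches the 1m1.04s "sudo systemctl restart docker" duration recorded earlier. When reproducing this by hand on the node, the state of both runtimes can be inspected with commands along these lines (not part of the captured log):
	
	  sudo systemctl status docker containerd
	  sudo journalctl --no-pager -u containerd
	  ls -l /run/containerd/containerd.sock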
	
	
	==> Docker <==
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.315145065Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.315211849Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.327294175Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.327360020Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.327395808Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.327464804Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.344544581Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.344650904Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.344675811Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.344832310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:26:07 ha-431000 cri-dockerd[1017]: time="2024-04-29T19:26:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/5910bbb30653433cc58b20067dbb519b7ffcd4b3262962dfa408e6e981ae7497/resolv.conf as [nameserver 192.169.0.1]"
	Apr 29 19:26:07 ha-431000 cri-dockerd[1017]: time="2024-04-29T19:26:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/b653ad03a91d0b2b9ad7480ff955b642746670e2d4b4451f426a163dc31b1339/resolv.conf as [nameserver 192.169.0.1]"
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.563933092Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.564322761Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.564336925Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.564471753Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:26:07 ha-431000 cri-dockerd[1017]: time="2024-04-29T19:26:07Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/1d309f3bad709379854b5ff94877311ba9efe4542cabd99cd0cdeeb95dfeb396/resolv.conf as [nameserver 192.169.0.1]"
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.651004137Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.651950662Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.652190959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.653396462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.680700376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.680924340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.681014604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.681180406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	1eccc3b061df0       cbb01a7bd410d                                                                                       About a minute ago   Running             coredns                   0                   1d309f3bad709       coredns-7db6d8ff4d-42q4w
	5fc9c9934ce6d       cbb01a7bd410d                                                                                       About a minute ago   Running             coredns                   0                   b653ad03a91d0       coredns-7db6d8ff4d-vpt8j
	2fe73ca28b695       6e38f40d628db                                                                                       About a minute ago   Running             storage-provisioner       0                   5910bbb306534       storage-provisioner
	2fc7f6cf26c66       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988            About a minute ago   Running             kindnet-cni               0                   77287d05e6e30       kindnet-7v2x2
	dd7e6dcf8c180       a0bf559e280cf                                                                                       About a minute ago   Running             kube-proxy                0                   68ae02827d8f8       kube-proxy-24zxg
	25e21b3064d0f       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f   About a minute ago   Running             kube-vip                  0                   eb02781b4cb9e       kube-vip-ha-431000
	253cb761306be       c42f13656d0b2                                                                                       About a minute ago   Running             kube-apiserver            0                   0b29b8a237387       kube-apiserver-ha-431000
	8689b2595f852       c7aad43836fa5                                                                                       About a minute ago   Running             kube-controller-manager   0                   9353c1ab6af44       kube-controller-manager-ha-431000
	395f732c4ecd8       259c8277fcbbc                                                                                       About a minute ago   Running             kube-scheduler            0                   b5d69e14edefa       kube-scheduler-ha-431000
	f0af9c768f874       3861cfcd7c04c                                                                                       About a minute ago   Running             etcd                      0                   d3b2bec8afdb2       etcd-ha-431000
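	This table is the node-side view of the container runtime; with the Docker runtime and cri-dockerd it can be reproduced on the node with the CRI client (a usage sketch, not part of the captured log), which prints the same columns:
	
	  sudo crictl ps -a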
	
	
	==> coredns [1eccc3b061df] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59271 - 28720 "HINFO IN 7673301142508958601.5024328155936603833. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011541653s
	
	
	==> coredns [5fc9c9934ce6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46682 - 61825 "HINFO IN 2083561514213073366.6289946638194081086. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012255792s
	
	
	==> describe nodes <==
	Name:               ha-431000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-431000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=ha-431000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T12_25_45_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:25:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-431000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:27:16 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:26:15 +0000   Mon, 29 Apr 2024 19:25:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:26:15 +0000   Mon, 29 Apr 2024 19:25:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:26:15 +0000   Mon, 29 Apr 2024 19:25:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:26:15 +0000   Mon, 29 Apr 2024 19:26:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.53
	  Hostname:    ha-431000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 0347e3fb0470420c86cb19dc5e37eb8c
	  System UUID:                24bd46dd-0000-0000-965c-91e20eb4b3f3
	  Boot ID:                    5a51c8ef-7deb-418c-a18b-22ccf6b75cab
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (10 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-7db6d8ff4d-42q4w             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     80s
	  kube-system                 coredns-7db6d8ff4d-vpt8j             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     80s
	  kube-system                 etcd-ha-431000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         94s
	  kube-system                 kindnet-7v2x2                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      80s
	  kube-system                 kube-apiserver-ha-431000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-controller-manager-ha-431000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-proxy-24zxg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-scheduler-ha-431000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         94s
	  kube-system                 kube-vip-ha-431000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         96s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         80s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 79s   kube-proxy       
	  Normal  Starting                 94s   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  94s   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  94s   kubelet          Node ha-431000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    94s   kubelet          Node ha-431000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     94s   kubelet          Node ha-431000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           80s   node-controller  Node ha-431000 event: Registered Node ha-431000 in Controller
	  Normal  NodeReady                72s   kubelet          Node ha-431000 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.007362] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.683718] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.250560] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +1.108677] systemd-fstab-generator[500]: Ignoring "noauto" option for root device
	[  +0.110997] systemd-fstab-generator[512]: Ignoring "noauto" option for root device
	[  +1.889303] systemd-fstab-generator[735]: Ignoring "noauto" option for root device
	[  +0.059318] kauditd_printk_skb: 67 callbacks suppressed
	[  +0.256507] systemd-fstab-generator[770]: Ignoring "noauto" option for root device
	[  +0.100729] systemd-fstab-generator[782]: Ignoring "noauto" option for root device
	[  +0.117080] systemd-fstab-generator[796]: Ignoring "noauto" option for root device
	[  +2.384127] systemd-fstab-generator[970]: Ignoring "noauto" option for root device
	[  +0.098660] systemd-fstab-generator[982]: Ignoring "noauto" option for root device
	[  +0.104412] systemd-fstab-generator[994]: Ignoring "noauto" option for root device
	[  +0.132603] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +4.124699] systemd-fstab-generator[1104]: Ignoring "noauto" option for root device
	[  +0.056838] kauditd_printk_skb: 205 callbacks suppressed
	[  +2.446594] systemd-fstab-generator[1304]: Ignoring "noauto" option for root device
	[  +3.800237] systemd-fstab-generator[1492]: Ignoring "noauto" option for root device
	[  +0.050917] kauditd_printk_skb: 73 callbacks suppressed
	[  +7.212746] systemd-fstab-generator[1990]: Ignoring "noauto" option for root device
	[  +0.076591] kauditd_printk_skb: 72 callbacks suppressed
	[ +14.491572] kauditd_printk_skb: 12 callbacks suppressed
	[Apr29 19:26] kauditd_printk_skb: 38 callbacks suppressed
	
	
	==> etcd [f0af9c768f87] <==
	{"level":"info","ts":"2024-04-29T19:25:39.441876Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc59dab1a1aba0 switched to configuration voters=(710541637035600800)"}
	{"level":"info","ts":"2024-04-29T19:25:39.442094Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"ab7009b4269bb52","local-member-id":"9dc59dab1a1aba0","added-peer-id":"9dc59dab1a1aba0","added-peer-peer-urls":["https://192.169.0.53:2380"]}
	{"level":"info","ts":"2024-04-29T19:25:39.451987Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-04-29T19:25:39.452287Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9dc59dab1a1aba0","initial-advertise-peer-urls":["https://192.169.0.53:2380"],"listen-peer-urls":["https://192.169.0.53:2380"],"advertise-client-urls":["https://192.169.0.53:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.53:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T19:25:39.452539Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T19:25:39.452814Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.169.0.53:2380"}
	{"level":"info","ts":"2024-04-29T19:25:39.4529Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.53:2380"}
	{"level":"info","ts":"2024-04-29T19:25:40.22976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc59dab1a1aba0 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-29T19:25:40.229854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc59dab1a1aba0 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-29T19:25:40.22989Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc59dab1a1aba0 received MsgPreVoteResp from 9dc59dab1a1aba0 at term 1"}
	{"level":"info","ts":"2024-04-29T19:25:40.229907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc59dab1a1aba0 became candidate at term 2"}
	{"level":"info","ts":"2024-04-29T19:25:40.22992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc59dab1a1aba0 received MsgVoteResp from 9dc59dab1a1aba0 at term 2"}
	{"level":"info","ts":"2024-04-29T19:25:40.229934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc59dab1a1aba0 became leader at term 2"}
	{"level":"info","ts":"2024-04-29T19:25:40.229975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9dc59dab1a1aba0 elected leader 9dc59dab1a1aba0 at term 2"}
	{"level":"info","ts":"2024-04-29T19:25:40.232269Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"9dc59dab1a1aba0","local-member-attributes":"{Name:ha-431000 ClientURLs:[https://192.169.0.53:2379]}","request-path":"/0/members/9dc59dab1a1aba0/attributes","cluster-id":"ab7009b4269bb52","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T19:25:40.232472Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T19:25:40.232823Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:25:40.233029Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T19:25:40.233086Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T19:25:40.236119Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T19:25:40.237354Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T19:25:40.237915Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.53:2379"}
	{"level":"info","ts":"2024-04-29T19:25:40.284926Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ab7009b4269bb52","local-member-id":"9dc59dab1a1aba0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:25:40.285192Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:25:40.285242Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	
	
	==> kernel <==
	 19:27:18 up 2 min,  0 users,  load average: 0.43, 0.20, 0.07
	Linux ha-431000 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2fc7f6cf26c6] <==
	I0429 19:26:02.281417       1 main.go:102] connected to apiserver: https://10.96.0.1:443
	I0429 19:26:02.281479       1 main.go:107] hostIP = 192.169.0.53
	podIP = 192.169.0.53
	I0429 19:26:02.281635       1 main.go:116] setting mtu 1500 for CNI 
	I0429 19:26:02.281666       1 main.go:146] kindnetd IP family: "ipv4"
	I0429 19:26:02.281682       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0429 19:26:02.489706       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:26:02.580129       1 main.go:227] handling current node
	I0429 19:26:12.591450       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:26:12.591695       1 main.go:227] handling current node
	I0429 19:26:22.602686       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:26:22.602745       1 main.go:227] handling current node
	I0429 19:26:32.613096       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:26:32.613250       1 main.go:227] handling current node
	I0429 19:26:42.619252       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:26:42.619297       1 main.go:227] handling current node
	I0429 19:26:52.630415       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:26:52.630449       1 main.go:227] handling current node
	I0429 19:27:02.632654       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:27:02.632689       1 main.go:227] handling current node
	I0429 19:27:12.636464       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:27:12.636839       1 main.go:227] handling current node
	
	
	==> kube-apiserver [253cb761306b] <==
	I0429 19:25:41.295120       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0429 19:25:41.295150       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0429 19:25:41.295387       1 shared_informer.go:320] Caches are synced for configmaps
	I0429 19:25:41.298083       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0429 19:25:41.298355       1 controller.go:615] quota admission added evaluator for: namespaces
	I0429 19:25:41.298726       1 aggregator.go:165] initial CRD sync complete...
	I0429 19:25:41.298734       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 19:25:41.298739       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 19:25:41.298742       1 cache.go:39] Caches are synced for autoregister controller
	I0429 19:25:41.341235       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 19:25:42.199810       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0429 19:25:42.203107       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0429 19:25:42.203598       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 19:25:42.475929       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 19:25:42.501043       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 19:25:42.531435       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0429 19:25:42.536355       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.53]
	I0429 19:25:42.537008       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 19:25:42.539635       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 19:25:43.279622       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 19:25:44.788367       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 19:25:44.794388       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 19:25:44.803386       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 19:25:58.079997       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0429 19:25:58.240228       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	
	
	==> kube-controller-manager [8689b2595f85] <==
	I0429 19:25:58.179791       1 shared_informer.go:320] Caches are synced for daemon sets
	I0429 19:25:58.181495       1 shared_informer.go:320] Caches are synced for attach detach
	I0429 19:25:58.191267       1 shared_informer.go:320] Caches are synced for taint
	I0429 19:25:58.191382       1 node_lifecycle_controller.go:1227] "Initializing eviction metric for zone" logger="node-lifecycle-controller" zone=""
	I0429 19:25:58.191425       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-431000"
	I0429 19:25:58.191449       1 node_lifecycle_controller.go:1031] "Controller detected that all Nodes are not-Ready. Entering master disruption mode" logger="node-lifecycle-controller"
	I0429 19:25:58.194805       1 shared_informer.go:320] Caches are synced for persistent volume
	I0429 19:25:58.197294       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.965304ms"
	I0429 19:25:58.210327       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-431000" podCIDRs=["10.244.0.0/24"]
	I0429 19:25:58.243038       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.71687ms"
	I0429 19:25:58.317120       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="73.53619ms"
	I0429 19:25:58.317171       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="31.981µs"
	I0429 19:25:58.580329       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 19:25:58.580784       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 19:25:58.589175       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 19:26:06.946975       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.17µs"
	I0429 19:26:06.950414       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="27.785µs"
	I0429 19:26:06.958967       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.397µs"
	I0429 19:26:06.969829       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.121µs"
	I0429 19:26:07.886482       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.564µs"
	I0429 19:26:07.917554       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="9.073261ms"
	I0429 19:26:07.917615       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.945µs"
	I0429 19:26:07.935068       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.658277ms"
	I0429 19:26:07.935131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="26.868µs"
	I0429 19:26:08.387295       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	
	
	==> kube-proxy [dd7e6dcf8c18] <==
	I0429 19:25:59.229445       1 server_linux.go:69] "Using iptables proxy"
	I0429 19:25:59.241484       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.53"]
	I0429 19:25:59.271201       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 19:25:59.271245       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 19:25:59.271262       1 server_linux.go:165] "Using iptables Proxier"
	I0429 19:25:59.273894       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 19:25:59.274358       1 server.go:872] "Version info" version="v1.30.0"
	I0429 19:25:59.274426       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:25:59.275610       1 config.go:192] "Starting service config controller"
	I0429 19:25:59.275903       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 19:25:59.275968       1 config.go:101] "Starting endpoint slice config controller"
	I0429 19:25:59.276082       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 19:25:59.276583       1 config.go:319] "Starting node config controller"
	I0429 19:25:59.277663       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 19:25:59.376395       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 19:25:59.376459       1 shared_informer.go:320] Caches are synced for service config
	I0429 19:25:59.377863       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [395f732c4ecd] <==
	W0429 19:25:41.293554       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 19:25:41.294406       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 19:25:41.294738       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 19:25:41.294772       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 19:25:41.295245       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 19:25:41.295295       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 19:25:41.295534       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 19:25:41.295865       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 19:25:41.296096       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 19:25:41.296336       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 19:25:41.296557       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 19:25:41.296590       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 19:25:41.296636       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 19:25:41.296645       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 19:25:41.297185       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 19:25:41.297218       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 19:25:41.298191       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 19:25:41.298225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 19:25:42.168872       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 19:25:42.168946       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 19:25:42.211386       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 19:25:42.211426       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 19:25:42.376364       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 19:25:42.376406       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0429 19:25:42.882998       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 19:25:58 ha-431000 kubelet[1997]: I0429 19:25:58.337182    1997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/67d75fae-ecac-4315-82ee-5daf7d1125ad-lib-modules\") pod \"kube-proxy-24zxg\" (UID: \"67d75fae-ecac-4315-82ee-5daf7d1125ad\") " pod="kube-system/kube-proxy-24zxg"
	Apr 29 19:25:58 ha-431000 kubelet[1997]: I0429 19:25:58.337194    1997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/67d75fae-ecac-4315-82ee-5daf7d1125ad-xtables-lock\") pod \"kube-proxy-24zxg\" (UID: \"67d75fae-ecac-4315-82ee-5daf7d1125ad\") " pod="kube-system/kube-proxy-24zxg"
	Apr 29 19:25:58 ha-431000 kubelet[1997]: I0429 19:25:58.337208    1997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79ldm\" (UniqueName: \"kubernetes.io/projected/f2c484e9-2f51-4a05-9f2c-f242e6851902-kube-api-access-79ldm\") pod \"kindnet-7v2x2\" (UID: \"f2c484e9-2f51-4a05-9f2c-f242e6851902\") " pod="kube-system/kindnet-7v2x2"
	Apr 29 19:25:58 ha-431000 kubelet[1997]: I0429 19:25:58.337224    1997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/f2c484e9-2f51-4a05-9f2c-f242e6851902-cni-cfg\") pod \"kindnet-7v2x2\" (UID: \"f2c484e9-2f51-4a05-9f2c-f242e6851902\") " pod="kube-system/kindnet-7v2x2"
	Apr 29 19:25:58 ha-431000 kubelet[1997]: I0429 19:25:58.337236    1997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/67d75fae-ecac-4315-82ee-5daf7d1125ad-kube-proxy\") pod \"kube-proxy-24zxg\" (UID: \"67d75fae-ecac-4315-82ee-5daf7d1125ad\") " pod="kube-system/kube-proxy-24zxg"
	Apr 29 19:25:58 ha-431000 kubelet[1997]: I0429 19:25:58.337247    1997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jj9k\" (UniqueName: \"kubernetes.io/projected/67d75fae-ecac-4315-82ee-5daf7d1125ad-kube-api-access-2jj9k\") pod \"kube-proxy-24zxg\" (UID: \"67d75fae-ecac-4315-82ee-5daf7d1125ad\") " pod="kube-system/kube-proxy-24zxg"
	Apr 29 19:25:59 ha-431000 kubelet[1997]: I0429 19:25:59.815236    1997 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-24zxg" podStartSLOduration=1.815225495 podStartE2EDuration="1.815225495s" podCreationTimestamp="2024-04-29 19:25:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-29 19:25:59.814975678 +0000 UTC m=+15.269944559" watchObservedRunningTime="2024-04-29 19:25:59.815225495 +0000 UTC m=+15.270194374"
	Apr 29 19:26:06 ha-431000 kubelet[1997]: I0429 19:26:06.927621    1997 kubelet_node_status.go:497] "Fast updating node status as it just became ready"
	Apr 29 19:26:06 ha-431000 kubelet[1997]: I0429 19:26:06.944398    1997 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kindnet-7v2x2" podStartSLOduration=6.077235064 podStartE2EDuration="8.944384872s" podCreationTimestamp="2024-04-29 19:25:58 +0000 UTC" firstStartedPulling="2024-04-29 19:25:59.20314383 +0000 UTC m=+14.658112700" lastFinishedPulling="2024-04-29 19:26:02.070293637 +0000 UTC m=+17.525262508" observedRunningTime="2024-04-29 19:26:02.853611928 +0000 UTC m=+18.308580809" watchObservedRunningTime="2024-04-29 19:26:06.944384872 +0000 UTC m=+22.399353747"
	Apr 29 19:26:06 ha-431000 kubelet[1997]: I0429 19:26:06.944670    1997 topology_manager.go:215] "Topology Admit Handler" podUID="68dbc91f-0d44-406b-8aa3-44f63ee6733f" podNamespace="kube-system" podName="coredns-7db6d8ff4d-vpt8j"
	Apr 29 19:26:06 ha-431000 kubelet[1997]: I0429 19:26:06.947182    1997 topology_manager.go:215] "Topology Admit Handler" podUID="4db91ee3-c284-4465-92c7-67d3a6a25207" podNamespace="kube-system" podName="storage-provisioner"
	Apr 29 19:26:06 ha-431000 kubelet[1997]: I0429 19:26:06.949465    1997 topology_manager.go:215] "Topology Admit Handler" podUID="aa555ed7-b08b-4566-bcc8-73241e83e54b" podNamespace="kube-system" podName="coredns-7db6d8ff4d-42q4w"
	Apr 29 19:26:06 ha-431000 kubelet[1997]: I0429 19:26:06.998093    1997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4db91ee3-c284-4465-92c7-67d3a6a25207-tmp\") pod \"storage-provisioner\" (UID: \"4db91ee3-c284-4465-92c7-67d3a6a25207\") " pod="kube-system/storage-provisioner"
	Apr 29 19:26:06 ha-431000 kubelet[1997]: I0429 19:26:06.998191    1997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vjt7k\" (UniqueName: \"kubernetes.io/projected/aa555ed7-b08b-4566-bcc8-73241e83e54b-kube-api-access-vjt7k\") pod \"coredns-7db6d8ff4d-42q4w\" (UID: \"aa555ed7-b08b-4566-bcc8-73241e83e54b\") " pod="kube-system/coredns-7db6d8ff4d-42q4w"
	Apr 29 19:26:06 ha-431000 kubelet[1997]: I0429 19:26:06.998213    1997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/68dbc91f-0d44-406b-8aa3-44f63ee6733f-config-volume\") pod \"coredns-7db6d8ff4d-vpt8j\" (UID: \"68dbc91f-0d44-406b-8aa3-44f63ee6733f\") " pod="kube-system/coredns-7db6d8ff4d-vpt8j"
	Apr 29 19:26:06 ha-431000 kubelet[1997]: I0429 19:26:06.998227    1997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x45kh\" (UniqueName: \"kubernetes.io/projected/68dbc91f-0d44-406b-8aa3-44f63ee6733f-kube-api-access-x45kh\") pod \"coredns-7db6d8ff4d-vpt8j\" (UID: \"68dbc91f-0d44-406b-8aa3-44f63ee6733f\") " pod="kube-system/coredns-7db6d8ff4d-vpt8j"
	Apr 29 19:26:06 ha-431000 kubelet[1997]: I0429 19:26:06.998302    1997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxwnn\" (UniqueName: \"kubernetes.io/projected/4db91ee3-c284-4465-92c7-67d3a6a25207-kube-api-access-pxwnn\") pod \"storage-provisioner\" (UID: \"4db91ee3-c284-4465-92c7-67d3a6a25207\") " pod="kube-system/storage-provisioner"
	Apr 29 19:26:06 ha-431000 kubelet[1997]: I0429 19:26:06.998315    1997 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa555ed7-b08b-4566-bcc8-73241e83e54b-config-volume\") pod \"coredns-7db6d8ff4d-42q4w\" (UID: \"aa555ed7-b08b-4566-bcc8-73241e83e54b\") " pod="kube-system/coredns-7db6d8ff4d-42q4w"
	Apr 29 19:26:07 ha-431000 kubelet[1997]: I0429 19:26:07.907516    1997 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-7db6d8ff4d-42q4w" podStartSLOduration=9.907500641 podStartE2EDuration="9.907500641s" podCreationTimestamp="2024-04-29 19:25:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-29 19:26:07.889310678 +0000 UTC m=+23.344279559" watchObservedRunningTime="2024-04-29 19:26:07.907500641 +0000 UTC m=+23.362469519"
	Apr 29 19:26:07 ha-431000 kubelet[1997]: I0429 19:26:07.927378    1997 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/storage-provisioner" podStartSLOduration=9.92736536 podStartE2EDuration="9.92736536s" podCreationTimestamp="2024-04-29 19:25:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2024-04-29 19:26:07.918817121 +0000 UTC m=+23.373785995" watchObservedRunningTime="2024-04-29 19:26:07.92736536 +0000 UTC m=+23.382334240"
	Apr 29 19:26:44 ha-431000 kubelet[1997]: E0429 19:26:44.685996    1997 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:26:44 ha-431000 kubelet[1997]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:26:44 ha-431000 kubelet[1997]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:26:44 ha-431000 kubelet[1997]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:26:44 ha-431000 kubelet[1997]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [2fe73ca28b69] <==
	I0429 19:26:07.672573       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 19:26:07.699495       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 19:26:07.699524       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 19:26:07.714130       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 19:26:07.729973       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-431000_95565a70-289a-4179-a0be-cfc0aea5e428!
	I0429 19:26:07.727595       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"198bfd0e-cd93-4191-9956-e0c2576aab68", APIVersion:"v1", ResourceVersion:"426", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-431000_95565a70-289a-4179-a0be-cfc0aea5e428 became leader
	I0429 19:26:07.830551       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-431000_95565a70-289a-4179-a0be-cfc0aea5e428!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-431000 -n ha-431000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-431000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/StartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/StartCluster (131.68s)
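The kube-scheduler "is forbidden" list/watch warnings in the post-mortem log above are typically transient during control-plane bring-up, before the bootstrap RBAC bindings are reconciled; the scheduler log ends with a cache-sync message. As an illustration only (these commands are not part of the test run; the context name is simply the test profile), the scheduler's effective permissions could be checked afterwards with:

	kubectl --context ha-431000 auth can-i list nodes --as=system:kube-scheduler
	kubectl --context ha-431000 auth can-i list poddisruptionbudgets.policy --as=system:kube-scheduler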

                                                
                                    
x
+
TestMultiControlPlane/serial/DeployApp (720s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- rollout status deployment/busybox
E0429 12:28:51.678428   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 12:28:51.684524   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 12:28:51.695014   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 12:28:51.716695   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 12:28:51.757962   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 12:28:51.838853   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 12:28:52.000223   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 12:28:52.320509   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 12:28:52.961802   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 12:28:54.243037   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 12:28:55.279843   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
E0429 12:28:56.804849   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 12:29:01.925878   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 12:29:12.167220   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 12:29:32.647357   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 12:30:13.607461   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 12:31:35.527454   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 12:33:51.677636   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 12:33:55.276808   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
E0429 12:34:19.367621   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 12:35:18.339438   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
ha_test.go:133: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-431000 -- rollout status deployment/busybox: exit status 1 (10m4.024645879s)

                                                
                                                
-- stdout --
	Waiting for deployment "busybox" rollout to finish: 0 of 3 updated replicas are available...
	Waiting for deployment "busybox" rollout to finish: 1 of 3 updated replicas are available...

                                                
                                                
-- /stdout --
** stderr ** 
	error: deployment "busybox" exceeded its progress deadline

                                                
                                                
** /stderr **
ha_test.go:135: failed to deploy busybox to ha (multi-control plane) cluster
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
E0429 12:38:51.676353   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 12:38:55.275588   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:149: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
ha_test.go:159: failed to resolve pod IPs: expected 3 Pod IPs but got 1 (may be temporary), output: "\n-- stdout --\n\t'10.244.0.4'\n\n-- /stdout --"
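Only one of the three busybox replicas ever reports a pod IP here, which matches the "does not have a host assigned" errors further down: the other two replicas apparently never get scheduled. As a hypothetical follow-up outside the test run, the pending replicas and the node placement of the running one could be inspected with:

	kubectl --context ha-431000 get pods -o wide
	kubectl --context ha-431000 describe pod busybox-fc5497c4f-fwxpx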
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-fc5497c4f-fwxpx -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-fc5497c4f-fwxpx -- nslookup kubernetes.io: exit status 1 (129.156538ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-fwxpx does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:173: Pod busybox-fc5497c4f-fwxpx could not resolve 'kubernetes.io': exit status 1
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-fc5497c4f-hb957 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-fc5497c4f-lrnxn -- nslookup kubernetes.io
ha_test.go:171: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-fc5497c4f-lrnxn -- nslookup kubernetes.io: exit status 1 (129.476445ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-lrnxn does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:173: Pod busybox-fc5497c4f-lrnxn could not resolve 'kubernetes.io': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-fc5497c4f-fwxpx -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-fc5497c4f-fwxpx -- nslookup kubernetes.default: exit status 1 (131.863289ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-fwxpx does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:183: Pod busybox-fc5497c4f-fwxpx could not resolve 'kubernetes.default': exit status 1
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-fc5497c4f-hb957 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-fc5497c4f-lrnxn -- nslookup kubernetes.default
ha_test.go:181: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-fc5497c4f-lrnxn -- nslookup kubernetes.default: exit status 1 (129.511028ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-lrnxn does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:183: Pod busybox-fc5497c4f-lrnxn could not resolve 'kubernetes.default': exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-fc5497c4f-fwxpx -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-fc5497c4f-fwxpx -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (128.111405ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-fwxpx does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:191: Pod busybox-fc5497c4f-fwxpx could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-fc5497c4f-hb957 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-fc5497c4f-lrnxn -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-fc5497c4f-lrnxn -- nslookup kubernetes.default.svc.cluster.local: exit status 1 (129.685683ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-lrnxn does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:191: Pod busybox-fc5497c4f-lrnxn could not resolve local service (kubernetes.default.svc.cluster.local): exit status 1
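Each nslookup failure above is the same symptom: the API server rejects kubectl exec because the target pod has no node assigned, so this looks like a scheduling problem rather than a CoreDNS one (the one scheduled replica, busybox-fc5497c4f-hb957, resolves every name). A hypothetical way to surface the scheduler's reasoning, not executed by this test, would be:

	kubectl --context ha-431000 get events -A --field-selector reason=FailedScheduling
	kubectl --context ha-431000 get nodes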
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000
helpers_test.go:244: <<< TestMultiControlPlane/serial/DeployApp FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/DeployApp]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-431000 logs -n 25: (2.239941935s)
helpers_test.go:252: TestMultiControlPlane/serial/DeployApp logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |      Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	| delete  | -p functional-692000                 | functional-692000 | jenkins | v1.33.0 | 29 Apr 24 12:25 PDT | 29 Apr 24 12:25 PDT |
	| start   | -p ha-431000 --wait=true             | ha-431000         | jenkins | v1.33.0 | 29 Apr 24 12:25 PDT |                     |
	|         | --memory=2200 --ha                   |                   |         |         |                     |                     |
	|         | -v=7 --alsologtostderr               |                   |         |         |                     |                     |
	|         | --driver=hyperkit                    |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- apply -f             | ha-431000         | jenkins | v1.33.0 | 29 Apr 24 12:27 PDT | 29 Apr 24 12:27 PDT |
	|         | ./testdata/ha/ha-pod-dns-test.yaml   |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- rollout status       | ha-431000         | jenkins | v1.33.0 | 29 Apr 24 12:27 PDT |                     |
	|         | deployment/busybox                   |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000         | jenkins | v1.33.0 | 29 Apr 24 12:37 PDT | 29 Apr 24 12:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000         | jenkins | v1.33.0 | 29 Apr 24 12:37 PDT | 29 Apr 24 12:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000         | jenkins | v1.33.0 | 29 Apr 24 12:37 PDT | 29 Apr 24 12:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000         | jenkins | v1.33.0 | 29 Apr 24 12:37 PDT | 29 Apr 24 12:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000         | jenkins | v1.33.0 | 29 Apr 24 12:37 PDT | 29 Apr 24 12:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000         | jenkins | v1.33.0 | 29 Apr 24 12:37 PDT | 29 Apr 24 12:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000         | jenkins | v1.33.0 | 29 Apr 24 12:37 PDT | 29 Apr 24 12:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000         | jenkins | v1.33.0 | 29 Apr 24 12:37 PDT | 29 Apr 24 12:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000         | jenkins | v1.33.0 | 29 Apr 24 12:38 PDT | 29 Apr 24 12:38 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000         | jenkins | v1.33.0 | 29 Apr 24 12:38 PDT | 29 Apr 24 12:38 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000         | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT | 29 Apr 24 12:39 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000         | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT | 29 Apr 24 12:39 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000         | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT |                     |
	|         | busybox-fc5497c4f-fwxpx --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000         | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT | 29 Apr 24 12:39 PDT |
	|         | busybox-fc5497c4f-hb957 --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000         | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT |                     |
	|         | busybox-fc5497c4f-lrnxn --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.io               |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000         | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT |                     |
	|         | busybox-fc5497c4f-fwxpx --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000         | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT | 29 Apr 24 12:39 PDT |
	|         | busybox-fc5497c4f-hb957 --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000         | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT |                     |
	|         | busybox-fc5497c4f-lrnxn --           |                   |         |         |                     |                     |
	|         | nslookup kubernetes.default          |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000         | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT |                     |
	|         | busybox-fc5497c4f-fwxpx -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000         | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT | 29 Apr 24 12:39 PDT |
	|         | busybox-fc5497c4f-hb957 -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000         | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT |                     |
	|         | busybox-fc5497c4f-lrnxn -- nslookup  |                   |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |                   |         |         |                     |                     |
	|---------|--------------------------------------|-------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 12:25:07
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 12:25:07.932524   17722 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:25:07.932931   17722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:25:07.932939   17722 out.go:304] Setting ErrFile to fd 2...
	I0429 12:25:07.932945   17722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:25:07.933219   17722 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18774-15855/.minikube/bin
	I0429 12:25:07.935193   17722 out.go:298] Setting JSON to false
	I0429 12:25:07.966866   17722 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":10477,"bootTime":1714408230,"procs":429,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0429 12:25:07.967005   17722 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 12:25:07.990175   17722 out.go:177] * [ha-431000] minikube v1.33.0 on Darwin 14.4.1
	I0429 12:25:08.073413   17722 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 12:25:08.048486   17722 notify.go:220] Checking for updates...
	I0429 12:25:08.160225   17722 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18774-15855/kubeconfig
	I0429 12:25:08.223714   17722 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 12:25:08.245577   17722 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:25:08.266426   17722 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18774-15855/.minikube
	I0429 12:25:08.296555   17722 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 12:25:08.318007   17722 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:25:08.348195   17722 out.go:177] * Using the hyperkit driver based on user configuration
	I0429 12:25:08.391518   17722 start.go:297] selected driver: hyperkit
	I0429 12:25:08.391552   17722 start.go:901] validating driver "hyperkit" against <nil>
	I0429 12:25:08.391574   17722 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 12:25:08.395979   17722 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:25:08.396124   17722 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18774-15855/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0429 12:25:08.404644   17722 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
	I0429 12:25:08.408717   17722 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:08.408755   17722 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0429 12:25:08.408783   17722 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 12:25:08.409041   17722 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 12:25:08.409091   17722 cni.go:84] Creating CNI manager for ""
	I0429 12:25:08.409101   17722 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0429 12:25:08.409113   17722 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 12:25:08.409179   17722 start.go:340] cluster config:
	{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:25:08.409294   17722 iso.go:125] acquiring lock: {Name:mk81eeca500d1c63233c315450a8b707db3bb810 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:25:08.451413   17722 out.go:177] * Starting "ha-431000" primary control-plane node in "ha-431000" cluster
	I0429 12:25:08.472458   17722 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 12:25:08.472552   17722 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 12:25:08.472575   17722 cache.go:56] Caching tarball of preloaded images
	I0429 12:25:08.472808   17722 preload.go:173] Found /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 12:25:08.472832   17722 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 12:25:08.473307   17722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json ...
	I0429 12:25:08.473347   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json: {Name:mk35a218377c5d37033d174261b78cb0d686fe80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:08.474156   17722 start.go:360] acquireMachinesLock for ha-431000: {Name:mkd2435de3b02389c894a9c5afc7bb14c11265dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 12:25:08.474283   17722 start.go:364] duration metric: took 104.507µs to acquireMachinesLock for "ha-431000"
	I0429 12:25:08.474332   17722 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 12:25:08.474420   17722 start.go:125] createHost starting for "" (driver="hyperkit")
	I0429 12:25:08.516479   17722 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 12:25:08.516777   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:08.516851   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:08.526733   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61961
	I0429 12:25:08.527153   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:08.527581   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:08.527592   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:08.527830   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:08.527946   17722 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0429 12:25:08.528043   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:08.528148   17722 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0429 12:25:08.528173   17722 client.go:168] LocalClient.Create starting
	I0429 12:25:08.528213   17722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem
	I0429 12:25:08.528263   17722 main.go:141] libmachine: Decoding PEM data...
	I0429 12:25:08.528280   17722 main.go:141] libmachine: Parsing certificate...
	I0429 12:25:08.528335   17722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem
	I0429 12:25:08.528371   17722 main.go:141] libmachine: Decoding PEM data...
	I0429 12:25:08.528388   17722 main.go:141] libmachine: Parsing certificate...
	I0429 12:25:08.528404   17722 main.go:141] libmachine: Running pre-create checks...
	I0429 12:25:08.528414   17722 main.go:141] libmachine: (ha-431000) Calling .PreCreateCheck
	I0429 12:25:08.528488   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:08.528641   17722 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0429 12:25:08.529147   17722 main.go:141] libmachine: Creating machine...
	I0429 12:25:08.529155   17722 main.go:141] libmachine: (ha-431000) Calling .Create
	I0429 12:25:08.529220   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:08.529331   17722 main.go:141] libmachine: (ha-431000) DBG | I0429 12:25:08.529213   17730 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/18774-15855/.minikube
	I0429 12:25:08.529403   17722 main.go:141] libmachine: (ha-431000) Downloading /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18774-15855/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 12:25:08.704997   17722 main.go:141] libmachine: (ha-431000) DBG | I0429 12:25:08.704900   17730 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa...
	I0429 12:25:08.942793   17722 main.go:141] libmachine: (ha-431000) DBG | I0429 12:25:08.942700   17730 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/ha-431000.rawdisk...
	I0429 12:25:08.942814   17722 main.go:141] libmachine: (ha-431000) DBG | Writing magic tar header
	I0429 12:25:08.942822   17722 main.go:141] libmachine: (ha-431000) DBG | Writing SSH key tar header
	I0429 12:25:08.943194   17722 main.go:141] libmachine: (ha-431000) DBG | I0429 12:25:08.943155   17730 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000 ...
	I0429 12:25:09.300525   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:09.300543   17722 main.go:141] libmachine: (ha-431000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/hyperkit.pid
	I0429 12:25:09.300607   17722 main.go:141] libmachine: (ha-431000) DBG | Using UUID 24bdec35-56f8-46dd-965c-91e20eb4b3f3
	I0429 12:25:09.409075   17722 main.go:141] libmachine: (ha-431000) DBG | Generated MAC 12:79:5a:6e:b2:5a
	I0429 12:25:09.409094   17722 main.go:141] libmachine: (ha-431000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0429 12:25:09.409125   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"24bdec35-56f8-46dd-965c-91e20eb4b3f3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000198630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0429 12:25:09.409150   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"24bdec35-56f8-46dd-965c-91e20eb4b3f3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000198630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0429 12:25:09.409187   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "24bdec35-56f8-46dd-965c-91e20eb4b3f3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/ha-431000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0429 12:25:09.409213   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 24bdec35-56f8-46dd-965c-91e20eb4b3f3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/ha-431000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/console-ring -f kexec,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0429 12:25:09.409223   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0429 12:25:09.412103   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: Pid is 17733
	I0429 12:25:09.412560   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 0
	I0429 12:25:09.412590   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:09.412663   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:09.413636   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:09.413744   17722 main.go:141] libmachine: (ha-431000) DBG | Found 51 entries in /var/db/dhcpd_leases!
	I0429 12:25:09.413764   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:09.413783   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:09.413800   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:09.413811   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:09.413824   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:09.413850   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:09.413862   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:09.413875   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:09.413883   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:09.413914   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:09.413927   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:09.413957   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:09.413970   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:09.413986   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:09.413992   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:09.414001   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:09.414010   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:09.414024   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:09.414041   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:09.414056   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:09.414068   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:09.414082   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:09.414098   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:09.414114   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:09.414144   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:09.414157   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:09.414169   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:09.414178   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:09.414187   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:09.414196   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:09.414206   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:09.414215   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:09.414231   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:09.414248   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:09.414272   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:09.414288   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:09.414300   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:09.414315   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:09.414327   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:09.414339   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:09.414350   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:09.414363   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:09.414382   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:09.414397   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:09.414414   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:09.414428   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:09.414441   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:09.414455   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:09.414467   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:09.414490   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:09.414509   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:09.419450   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0429 12:25:09.470495   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0429 12:25:09.471088   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0429 12:25:09.471110   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0429 12:25:09.471118   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0429 12:25:09.471128   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0429 12:25:09.850480   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0429 12:25:09.850496   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0429 12:25:09.966742   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0429 12:25:09.966763   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0429 12:25:09.966770   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0429 12:25:09.966776   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0429 12:25:09.967629   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0429 12:25:09.967639   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0429 12:25:11.414538   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 1
	I0429 12:25:11.414553   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:11.414604   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:11.415399   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:11.415486   17722 main.go:141] libmachine: (ha-431000) DBG | Found 51 entries in /var/db/dhcpd_leases!
	I0429 12:25:11.415495   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:11.415504   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:11.415511   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:11.415518   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:11.415539   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:11.415546   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:11.415584   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:11.415593   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:11.415606   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:11.415614   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:11.415624   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:11.415633   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:11.415641   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:11.415647   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:11.415654   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:11.415660   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:11.415667   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:11.415676   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:11.415682   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:11.415689   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:11.415695   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:11.415702   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:11.415709   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:11.415714   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:11.415722   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:11.415729   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:11.415749   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:11.415755   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:11.415763   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:11.415771   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:11.415782   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:11.415790   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:11.415798   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:11.415804   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:11.415809   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:11.415816   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:11.415822   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:11.415828   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:11.415836   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:11.415844   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:11.415850   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:11.415858   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:11.415864   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:11.415872   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:11.415879   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:11.415886   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:11.415893   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:11.415898   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:11.415910   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:11.415924   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:11.415936   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:13.416614   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 2
	I0429 12:25:13.416628   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:13.416689   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:13.417651   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:13.417739   17722 main.go:141] libmachine: (ha-431000) DBG | Found 51 entries in /var/db/dhcpd_leases!
	I0429 12:25:13.417747   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:13.417765   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:13.417774   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:13.417791   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:13.417803   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:13.417810   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:13.417818   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:13.417823   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:13.417830   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:13.417836   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:13.417843   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:13.417849   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:13.417861   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:13.417870   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:13.417883   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:13.417895   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:13.417902   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:13.417909   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:13.417922   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:13.417935   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:13.417949   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:13.417955   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:13.417969   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:13.417980   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:13.417992   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:13.418008   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:13.418016   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:13.418023   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:13.418031   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:13.418038   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:13.418044   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:13.418052   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:13.418059   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:13.418065   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:13.418083   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:13.418093   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:13.418101   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:13.418108   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:13.418116   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:13.418123   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:13.418130   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:13.418137   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:13.418144   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:13.418151   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:13.418163   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:13.418181   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:13.418190   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:13.418198   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:13.418205   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:13.418212   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:13.418220   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:15.211585   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:15 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0429 12:25:15.211693   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:15 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0429 12:25:15.211704   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:15 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0429 12:25:15.236755   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:15 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0429 12:25:15.420156   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 3
	I0429 12:25:15.420179   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:15.420320   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:15.421755   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:15.421973   17722 main.go:141] libmachine: (ha-431000) DBG | Found 51 entries in /var/db/dhcpd_leases!
	I0429 12:25:15.421993   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:15.422007   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:15.422103   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:15.422117   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:15.422133   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:15.422156   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:15.422173   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:15.422185   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:15.422194   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:15.422211   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:15.422225   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:15.422235   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:15.422244   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:15.422263   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:15.422280   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:15.422297   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:15.422313   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:15.422336   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:15.422353   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:15.422369   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:15.422380   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:15.422393   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:15.422404   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:15.422412   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:15.422424   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:15.422432   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:15.422442   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:15.422456   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:15.422472   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:15.422492   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:15.422511   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:15.422535   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:15.422552   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:15.422563   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:15.422571   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:15.422581   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:15.422599   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:15.422609   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:15.422620   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:15.422628   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:15.422639   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:15.422670   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:15.422682   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:15.422700   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:15.422711   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:15.422721   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:15.422732   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:15.422754   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:15.422771   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:15.422797   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:15.422818   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:17.422424   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 4
	I0429 12:25:17.422440   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:17.422505   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:17.423336   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:17.423416   17722 main.go:141] libmachine: (ha-431000) DBG | Found 51 entries in /var/db/dhcpd_leases!
	I0429 12:25:17.423428   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:17.423437   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:17.423446   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:17.423453   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:17.423470   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:17.423477   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:17.423484   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:17.423492   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:17.423498   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:17.423504   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:17.423514   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:17.423519   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:17.423532   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:17.423542   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:17.423561   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:17.423570   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:17.423577   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:17.423593   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:17.423600   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:17.423608   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:17.423615   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:17.423621   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:17.423627   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:17.423634   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:17.423639   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:17.423647   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:17.423653   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:17.423666   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:17.423674   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:17.423681   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:17.423696   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:17.423704   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:17.423711   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:17.423718   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:17.423726   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:17.423733   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:17.423741   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:17.423750   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:17.423758   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:17.423764   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:17.423770   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:17.423780   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:17.423792   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:17.423800   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:17.423808   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:17.423815   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:17.423821   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:17.423838   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:17.423848   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:17.423855   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:17.423864   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:19.424012   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 5
	I0429 12:25:19.424040   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:19.424118   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:19.424931   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:19.425019   17722 main.go:141] libmachine: (ha-431000) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:25:19.425031   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:25:19.425039   17722 main.go:141] libmachine: (ha-431000) DBG | Found match: 12:79:5a:6e:b2:5a
	I0429 12:25:19.425043   17722 main.go:141] libmachine: (ha-431000) DBG | IP: 192.169.0.53
	I0429 12:25:19.425100   17722 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0429 12:25:19.425750   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:19.425854   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:19.425968   17722 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 12:25:19.425977   17722 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0429 12:25:19.426076   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:19.426132   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:19.426957   17722 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 12:25:19.426968   17722 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 12:25:19.426974   17722 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 12:25:19.426979   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:19.427082   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:19.427181   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:19.427279   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:19.427395   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:19.427528   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:19.427739   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:19.427747   17722 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 12:25:20.480764   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:25:20.480777   17722 main.go:141] libmachine: Detecting the provisioner...
	I0429 12:25:20.480782   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:20.480910   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:20.481005   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.481084   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.481169   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:20.481302   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:20.481440   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:20.481448   17722 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 12:25:20.531217   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 12:25:20.531271   17722 main.go:141] libmachine: found compatible host: buildroot
	I0429 12:25:20.531277   17722 main.go:141] libmachine: Provisioning with buildroot...
	I0429 12:25:20.531282   17722 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0429 12:25:20.531410   17722 buildroot.go:166] provisioning hostname "ha-431000"
	I0429 12:25:20.531421   17722 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0429 12:25:20.531516   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:20.531611   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:20.531708   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.531810   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.531912   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:20.532028   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:20.532159   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:20.532169   17722 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000 && echo "ha-431000" | sudo tee /etc/hostname
	I0429 12:25:20.591606   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000
	
	I0429 12:25:20.591626   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:20.591764   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:20.591878   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.591967   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.592068   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:20.592207   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:20.592350   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:20.592361   17722 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 12:25:20.648101   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:25:20.648121   17722 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18774-15855/.minikube CaCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18774-15855/.minikube}
	I0429 12:25:20.648135   17722 buildroot.go:174] setting up certificates
	I0429 12:25:20.648144   17722 provision.go:84] configureAuth start
	I0429 12:25:20.648150   17722 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0429 12:25:20.648287   17722 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0429 12:25:20.648383   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:20.648467   17722 provision.go:143] copyHostCerts
	I0429 12:25:20.648496   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem
	I0429 12:25:20.648561   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem, removing ...
	I0429 12:25:20.648570   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem
	I0429 12:25:20.648715   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem (1082 bytes)
	I0429 12:25:20.648916   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem
	I0429 12:25:20.648956   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem, removing ...
	I0429 12:25:20.648961   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem
	I0429 12:25:20.649049   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem (1123 bytes)
	I0429 12:25:20.649197   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem
	I0429 12:25:20.649236   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem, removing ...
	I0429 12:25:20.649241   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem
	I0429 12:25:20.649330   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem (1675 bytes)
	I0429 12:25:20.649473   17722 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca-key.pem org=jenkins.ha-431000 san=[127.0.0.1 192.169.0.53 ha-431000 localhost minikube]
	I0429 12:25:20.983440   17722 provision.go:177] copyRemoteCerts
	I0429 12:25:20.983549   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 12:25:20.983566   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:20.983728   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:20.983833   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.983973   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:20.984120   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:21.014934   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 12:25:21.015004   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 12:25:21.034390   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 12:25:21.034453   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0429 12:25:21.055272   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 12:25:21.055342   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 12:25:21.074968   17722 provision.go:87] duration metric: took 426.81697ms to configureAuth
	I0429 12:25:21.074981   17722 buildroot.go:189] setting minikube options for container-runtime
	I0429 12:25:21.075129   17722 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:25:21.075141   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:21.075289   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:21.075381   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:21.075460   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.075546   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.075623   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:21.075741   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:21.075864   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:21.075872   17722 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 12:25:21.124122   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 12:25:21.124135   17722 buildroot.go:70] root file system type: tmpfs
	I0429 12:25:21.124211   17722 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 12:25:21.124224   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:21.124366   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:21.124464   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.124564   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.124654   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:21.124800   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:21.124944   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:21.124987   17722 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 12:25:21.192987   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 12:25:21.193010   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:21.193155   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:21.193259   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.193352   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.193439   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:21.193576   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:21.193725   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:21.193737   17722 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 12:25:22.738902   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 12:25:22.738918   17722 main.go:141] libmachine: Checking connection to Docker...
	I0429 12:25:22.738926   17722 main.go:141] libmachine: (ha-431000) Calling .GetURL
	I0429 12:25:22.739069   17722 main.go:141] libmachine: Docker is up and running!
	I0429 12:25:22.739077   17722 main.go:141] libmachine: Reticulating splines...
	I0429 12:25:22.739081   17722 client.go:171] duration metric: took 14.211068638s to LocalClient.Create
	I0429 12:25:22.739093   17722 start.go:167] duration metric: took 14.211111126s to libmachine.API.Create "ha-431000"
	I0429 12:25:22.739099   17722 start.go:293] postStartSetup for "ha-431000" (driver="hyperkit")
	I0429 12:25:22.739106   17722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 12:25:22.739115   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:22.739266   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 12:25:22.739278   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:22.739375   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:22.739470   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:22.739572   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:22.739667   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:22.770970   17722 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 12:25:22.774135   17722 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 12:25:22.774148   17722 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18774-15855/.minikube/addons for local assets ...
	I0429 12:25:22.774248   17722 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18774-15855/.minikube/files for local assets ...
	I0429 12:25:22.774464   17722 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem -> 163012.pem in /etc/ssl/certs
	I0429 12:25:22.774471   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem -> /etc/ssl/certs/163012.pem
	I0429 12:25:22.774683   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 12:25:22.781634   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem --> /etc/ssl/certs/163012.pem (1708 bytes)
	I0429 12:25:22.801443   17722 start.go:296] duration metric: took 62.337848ms for postStartSetup
	I0429 12:25:22.801469   17722 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0429 12:25:22.802046   17722 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0429 12:25:22.802212   17722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json ...
	I0429 12:25:22.802533   17722 start.go:128] duration metric: took 14.328253816s to createHost
	I0429 12:25:22.802553   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:22.802653   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:22.802747   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:22.802827   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:22.802911   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:22.803023   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:22.803149   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:22.803156   17722 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 12:25:22.853273   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714418723.058219257
	
	I0429 12:25:22.853285   17722 fix.go:216] guest clock: 1714418723.058219257
	I0429 12:25:22.853294   17722 fix.go:229] Guest: 2024-04-29 12:25:23.058219257 -0700 PDT Remote: 2024-04-29 12:25:22.80254 -0700 PDT m=+14.919884293 (delta=255.679257ms)
	I0429 12:25:22.853306   17722 fix.go:200] guest clock delta is within tolerance: 255.679257ms
	I0429 12:25:22.853309   17722 start.go:83] releasing machines lock for "ha-431000", held for 14.379182495s
	I0429 12:25:22.853329   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:22.853469   17722 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0429 12:25:22.853555   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:22.853914   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:22.854029   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:22.854103   17722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 12:25:22.854130   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:22.854195   17722 ssh_runner.go:195] Run: cat /version.json
	I0429 12:25:22.854205   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:22.854229   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:22.854325   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:22.854339   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:22.854430   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:22.854457   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:22.854550   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:22.854559   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:22.854669   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:22.880227   17722 ssh_runner.go:195] Run: systemctl --version
	I0429 12:25:22.884904   17722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 12:25:23.083357   17722 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 12:25:23.083479   17722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 12:25:23.099007   17722 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 12:25:23.099018   17722 start.go:494] detecting cgroup driver to use...
	I0429 12:25:23.099124   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:25:23.114251   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 12:25:23.124072   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 12:25:23.133303   17722 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 12:25:23.133353   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 12:25:23.142358   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:25:23.151274   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 12:25:23.160219   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:25:23.169150   17722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 12:25:23.178221   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 12:25:23.187855   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 12:25:23.200187   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 12:25:23.215042   17722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 12:25:23.229984   17722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 12:25:23.239505   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:23.358187   17722 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 12:25:23.377452   17722 start.go:494] detecting cgroup driver to use...
	I0429 12:25:23.377536   17722 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 12:25:23.390115   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:25:23.402150   17722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 12:25:23.418123   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:25:23.430203   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:25:23.441283   17722 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 12:25:23.508378   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:25:23.519755   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:25:23.536893   17722 ssh_runner.go:195] Run: which cri-dockerd
	I0429 12:25:23.539939   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 12:25:23.548090   17722 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 12:25:23.561479   17722 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 12:25:23.659092   17722 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 12:25:23.770465   17722 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 12:25:23.770541   17722 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 12:25:23.785669   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:23.879272   17722 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 12:25:26.127907   17722 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.248642004s)
	I0429 12:25:26.127971   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 12:25:26.139541   17722 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0429 12:25:26.153363   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 12:25:26.163941   17722 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 12:25:26.260524   17722 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 12:25:26.359754   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:26.473092   17722 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 12:25:26.486634   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 12:25:26.497861   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:26.593975   17722 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 12:25:26.650473   17722 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 12:25:26.650551   17722 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 12:25:26.655442   17722 start.go:562] Will wait 60s for crictl version
	I0429 12:25:26.655497   17722 ssh_runner.go:195] Run: which crictl
	I0429 12:25:26.658642   17722 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 12:25:26.687124   17722 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 12:25:26.687194   17722 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 12:25:26.703960   17722 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 12:25:26.743708   17722 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 12:25:26.743769   17722 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0429 12:25:26.744198   17722 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0429 12:25:26.748889   17722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 12:25:26.759848   17722 kubeadm.go:877] updating cluster {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 12:25:26.759910   17722 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 12:25:26.759963   17722 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 12:25:26.778002   17722 docker.go:685] Got preloaded images: 
	I0429 12:25:26.778015   17722 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0429 12:25:26.778063   17722 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 12:25:26.792529   17722 ssh_runner.go:195] Run: which lz4
	I0429 12:25:26.795405   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0429 12:25:26.795520   17722 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0429 12:25:26.798617   17722 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 12:25:26.798635   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0429 12:25:27.765888   17722 docker.go:649] duration metric: took 970.420369ms to copy over tarball
	I0429 12:25:27.765960   17722 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 12:25:30.567839   17722 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.801895501s)
	I0429 12:25:30.567854   17722 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 12:25:30.595797   17722 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 12:25:30.604930   17722 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0429 12:25:30.618739   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:30.724313   17722 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 12:25:32.998664   17722 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.274358101s)
	I0429 12:25:32.998761   17722 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 12:25:33.010141   17722 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 12:25:33.010164   17722 cache_images.go:84] Images are preloaded, skipping loading
	I0429 12:25:33.010175   17722 kubeadm.go:928] updating node { 192.169.0.53 8443 v1.30.0 docker true true} ...
	I0429 12:25:33.010258   17722 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 12:25:33.010327   17722 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 12:25:33.028263   17722 cni.go:84] Creating CNI manager for ""
	I0429 12:25:33.028276   17722 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 12:25:33.028288   17722 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 12:25:33.028307   17722 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.53 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-431000 NodeName:ha-431000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 12:25:33.028401   17722 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-431000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 12:25:33.028424   17722 kube-vip.go:115] generating kube-vip config ...
	I0429 12:25:33.028473   17722 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 12:25:33.041730   17722 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0429 12:25:33.041824   17722 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0429 12:25:33.041883   17722 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 12:25:33.049658   17722 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 12:25:33.049723   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0429 12:25:33.057238   17722 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0429 12:25:33.070533   17722 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 12:25:33.084130   17722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0429 12:25:33.098615   17722 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0429 12:25:33.112003   17722 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0429 12:25:33.115112   17722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 12:25:33.124648   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:33.220871   17722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:25:33.235322   17722 certs.go:68] Setting up /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000 for IP: 192.169.0.53
	I0429 12:25:33.235337   17722 certs.go:194] generating shared ca certs ...
	I0429 12:25:33.235350   17722 certs.go:226] acquiring lock for ca certs: {Name:mk18b421bd78dac9f18e716e2a7f75e3dd2eab02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.235540   17722 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.key
	I0429 12:25:33.235613   17722 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/proxy-client-ca.key
	I0429 12:25:33.235624   17722 certs.go:256] generating profile certs ...
	I0429 12:25:33.235671   17722 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.key
	I0429 12:25:33.235685   17722 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.crt with IP's: []
	I0429 12:25:33.339009   17722 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.crt ...
	I0429 12:25:33.339025   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.crt: {Name:mkb1110c9ecfb391fd480dbb43740124cada1f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.339372   17722 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.key ...
	I0429 12:25:33.339380   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.key: {Name:mkd77fb4d6c3d30f48c31a52650895ec059a5da2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.339601   17722 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key.da16b1a9
	I0429 12:25:33.339619   17722 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt.da16b1a9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.53 192.169.0.254]
	I0429 12:25:33.423656   17722 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt.da16b1a9 ...
	I0429 12:25:33.423669   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt.da16b1a9: {Name:mk814e943097b224c38c531359e60f0e011c83a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.454432   17722 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key.da16b1a9 ...
	I0429 12:25:33.454489   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key.da16b1a9: {Name:mkf400b8f0ffe83cfed5eaf2cbd6dc5c6a80d173 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.455578   17722 certs.go:381] copying /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt.da16b1a9 -> /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt
	I0429 12:25:33.455838   17722 certs.go:385] copying /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key.da16b1a9 -> /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key
	I0429 12:25:33.456077   17722 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.key
	I0429 12:25:33.456094   17722 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.crt with IP's: []
	I0429 12:25:33.747139   17722 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.crt ...
	I0429 12:25:33.747160   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.crt: {Name:mk16d91ff8fca22baa28a0d26524b0eab36376d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.747480   17722 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.key ...
	I0429 12:25:33.747491   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.key: {Name:mk54d72d806a4c855d8c1362244d32326e1573f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.747702   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 12:25:33.747734   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 12:25:33.747753   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 12:25:33.747772   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 12:25:33.747790   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 12:25:33.747808   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 12:25:33.747826   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 12:25:33.747844   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 12:25:33.747942   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/16301.pem (1338 bytes)
	W0429 12:25:33.747991   17722 certs.go:480] ignoring /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/16301_empty.pem, impossibly tiny 0 bytes
	I0429 12:25:33.747999   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 12:25:33.748029   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem (1082 bytes)
	I0429 12:25:33.748058   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem (1123 bytes)
	I0429 12:25:33.748086   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem (1675 bytes)
	I0429 12:25:33.748159   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem (1708 bytes)
	I0429 12:25:33.748192   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:25:33.748211   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/16301.pem -> /usr/share/ca-certificates/16301.pem
	I0429 12:25:33.748229   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem -> /usr/share/ca-certificates/163012.pem
	I0429 12:25:33.748622   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 12:25:33.769199   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 12:25:33.789288   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 12:25:33.809631   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 12:25:33.829028   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 12:25:33.848280   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 12:25:33.867787   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 12:25:33.886985   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 12:25:33.908803   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 12:25:33.936146   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/16301.pem --> /usr/share/ca-certificates/16301.pem (1338 bytes)
	I0429 12:25:33.955589   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem --> /usr/share/ca-certificates/163012.pem (1708 bytes)
	I0429 12:25:33.976759   17722 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 12:25:33.990981   17722 ssh_runner.go:195] Run: openssl version
	I0429 12:25:33.995277   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163012.pem && ln -fs /usr/share/ca-certificates/163012.pem /etc/ssl/certs/163012.pem"
	I0429 12:25:34.003661   17722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163012.pem
	I0429 12:25:34.007097   17722 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 19:19 /usr/share/ca-certificates/163012.pem
	I0429 12:25:34.007137   17722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163012.pem
	I0429 12:25:34.011417   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163012.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 12:25:34.019724   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 12:25:34.027921   17722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:25:34.031378   17722 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 19:11 /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:25:34.031414   17722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:25:34.035640   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 12:25:34.044197   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16301.pem && ln -fs /usr/share/ca-certificates/16301.pem /etc/ssl/certs/16301.pem"
	I0429 12:25:34.052743   17722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16301.pem
	I0429 12:25:34.056299   17722 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 19:19 /usr/share/ca-certificates/16301.pem
	I0429 12:25:34.056347   17722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16301.pem
	I0429 12:25:34.060683   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16301.pem /etc/ssl/certs/51391683.0"
	I0429 12:25:34.069054   17722 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 12:25:34.072258   17722 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 12:25:34.072301   17722 kubeadm.go:391] StartCluster: {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:25:34.072393   17722 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 12:25:34.083740   17722 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 12:25:34.091194   17722 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 12:25:34.098727   17722 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 12:25:34.106172   17722 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 12:25:34.106181   17722 kubeadm.go:156] found existing configuration files:
	
	I0429 12:25:34.106220   17722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 12:25:34.113138   17722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 12:25:34.113180   17722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 12:25:34.120473   17722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 12:25:34.127438   17722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 12:25:34.127477   17722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 12:25:34.134756   17722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 12:25:34.141743   17722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 12:25:34.141780   17722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 12:25:34.149046   17722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 12:25:34.156264   17722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 12:25:34.156318   17722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
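	The sequence from 12:25:34.098 through 12:25:34.156 is minikube's stale-config check: each kubeconfig under /etc/kubernetes is searched for the expected control-plane endpoint and deleted when the endpoint is absent (here the files simply do not exist yet, so every grep exits with status 2). A rough Go sketch of that loop, simplified to run locally rather than over SSH (hypothetical helper; the endpoint string is the one shown in the log):

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs removes any kubeconfig that does not reference the
// expected API endpoint, matching the grep/rm sequence in the log above.
func cleanStaleKubeconfigs(endpoint string, paths []string) {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			continue // file missing: nothing to clean up
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("%q not found in %s - removing\n", endpoint, p)
			_ = os.Remove(p)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("https://control-plane.minikube.internal:8443", []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	})
}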
	I0429 12:25:34.163835   17722 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 12:25:34.237910   17722 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 12:25:34.237955   17722 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 12:25:34.323023   17722 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 12:25:34.323108   17722 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 12:25:34.323194   17722 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 12:25:34.504766   17722 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 12:25:34.536903   17722 out.go:204]   - Generating certificates and keys ...
	I0429 12:25:34.536959   17722 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 12:25:34.537016   17722 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 12:25:34.879309   17722 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 12:25:34.943072   17722 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 12:25:35.013417   17722 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 12:25:35.182918   17722 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 12:25:35.261483   17722 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 12:25:35.261590   17722 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.53 127.0.0.1 ::1]
	I0429 12:25:35.419895   17722 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 12:25:35.420010   17722 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.53 127.0.0.1 ::1]
	I0429 12:25:35.479554   17722 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 12:25:35.607018   17722 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 12:25:35.903671   17722 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 12:25:35.904031   17722 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 12:25:36.142024   17722 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 12:25:36.288455   17722 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 12:25:36.556621   17722 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 12:25:36.634522   17722 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 12:25:36.899304   17722 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 12:25:36.899717   17722 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 12:25:36.902090   17722 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 12:25:36.923478   17722 out.go:204]   - Booting up control plane ...
	I0429 12:25:36.923546   17722 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 12:25:36.923600   17722 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 12:25:36.923664   17722 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 12:25:36.923740   17722 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 12:25:36.923808   17722 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 12:25:36.923851   17722 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 12:25:37.025954   17722 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 12:25:37.026032   17722 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 12:25:38.035770   17722 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.010128476s
	I0429 12:25:38.035905   17722 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 12:25:43.764081   17722 kubeadm.go:309] [api-check] The API server is healthy after 5.731795501s
	I0429 12:25:43.772560   17722 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 12:25:43.789869   17722 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 12:25:43.805688   17722 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 12:25:43.805836   17722 kubeadm.go:309] [mark-control-plane] Marking the node ha-431000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 12:25:43.811946   17722 kubeadm.go:309] [bootstrap-token] Using token: m6uvir.d4h53m26b1o2593q
	I0429 12:25:43.849413   17722 out.go:204]   - Configuring RBAC rules ...
	I0429 12:25:43.849574   17722 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 12:25:43.852729   17722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 12:25:43.892587   17722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 12:25:43.894879   17722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 12:25:43.896753   17722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 12:25:43.898705   17722 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 12:25:44.168425   17722 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 12:25:44.581231   17722 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 12:25:45.167548   17722 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 12:25:45.168144   17722 kubeadm.go:309] 
	I0429 12:25:45.168196   17722 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 12:25:45.168206   17722 kubeadm.go:309] 
	I0429 12:25:45.168263   17722 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 12:25:45.168268   17722 kubeadm.go:309] 
	I0429 12:25:45.168291   17722 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 12:25:45.168346   17722 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 12:25:45.168395   17722 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 12:25:45.168404   17722 kubeadm.go:309] 
	I0429 12:25:45.168448   17722 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 12:25:45.168454   17722 kubeadm.go:309] 
	I0429 12:25:45.168486   17722 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 12:25:45.168492   17722 kubeadm.go:309] 
	I0429 12:25:45.168531   17722 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 12:25:45.168592   17722 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 12:25:45.168639   17722 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 12:25:45.168643   17722 kubeadm.go:309] 
	I0429 12:25:45.168723   17722 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 12:25:45.168792   17722 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 12:25:45.168798   17722 kubeadm.go:309] 
	I0429 12:25:45.168870   17722 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token m6uvir.d4h53m26b1o2593q \
	I0429 12:25:45.168957   17722 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:536cc8a1d30a1fe87d4b4651015a78bced2496ff2cc6e72fc88d924b8d819ff9 \
	I0429 12:25:45.168980   17722 kubeadm.go:309] 	--control-plane 
	I0429 12:25:45.168984   17722 kubeadm.go:309] 
	I0429 12:25:45.169055   17722 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 12:25:45.169064   17722 kubeadm.go:309] 
	I0429 12:25:45.169140   17722 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token m6uvir.d4h53m26b1o2593q \
	I0429 12:25:45.169225   17722 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:536cc8a1d30a1fe87d4b4651015a78bced2496ff2cc6e72fc88d924b8d819ff9 
	I0429 12:25:45.169327   17722 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 12:25:45.169339   17722 cni.go:84] Creating CNI manager for ""
	I0429 12:25:45.169343   17722 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 12:25:45.192418   17722 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 12:25:45.250402   17722 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 12:25:45.254278   17722 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 12:25:45.254288   17722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 12:25:45.273385   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 12:25:45.493620   17722 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 12:25:45.493687   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:45.493694   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-431000 minikube.k8s.io/updated_at=2024_04_29T12_25_45_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=ha-431000 minikube.k8s.io/primary=true
	I0429 12:25:45.612967   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:45.644835   17722 ops.go:34] apiserver oom_adj: -16
	I0429 12:25:46.113178   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:46.613113   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:47.114140   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:47.613848   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:48.113291   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:48.613593   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:49.113154   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:49.613813   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:50.113663   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:50.613520   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:51.113510   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:51.613048   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:52.113848   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:52.614240   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:53.113945   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:53.615040   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:54.113136   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:54.614312   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:55.113432   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:55.614732   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:56.113281   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:56.613014   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:57.114177   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:57.613453   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:57.726340   17722 kubeadm.go:1107] duration metric: took 12.232843816s to wait for elevateKubeSystemPrivileges
	W0429 12:25:57.726368   17722 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 12:25:57.726373   17722 kubeadm.go:393] duration metric: took 23.654349307s to StartCluster
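	The repeated "kubectl get sa default" calls between 12:25:45.612 and 12:25:57.613 are a poll at roughly 500ms intervals: the default service account has to exist before the minikube-rbac cluster-admin binding can take effect, which is why elevateKubeSystemPrivileges reports a 12.2s wait here. A hedged Go sketch of that retry loop (hypothetical helper that shells out to kubectl the same way the log does):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForDefaultSA polls until the "default" service account is visible,
// analogous to the repeated `kubectl get sa default` calls in the log.
func waitForDefaultSA(kubectl, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectl, "get", "sa", "default", "--kubeconfig="+kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // service account exists; safe to apply RBAC
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("default service account not ready after %s", timeout)
}

func main() {
	err := waitForDefaultSA("/var/lib/minikube/binaries/v1.30.0/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}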
	I0429 12:25:57.726388   17722 settings.go:142] acquiring lock: {Name:mk4619a2e41059be70c5979af83844e654a88068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:57.726480   17722 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18774-15855/kubeconfig
	I0429 12:25:57.726968   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/kubeconfig: {Name:mkea524d41d79b030c40cdcb50fe58e264b35b6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:57.727247   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 12:25:57.727263   17722 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 12:25:57.727278   17722 start.go:240] waiting for startup goroutines ...
	I0429 12:25:57.727289   17722 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 12:25:57.727325   17722 addons.go:69] Setting storage-provisioner=true in profile "ha-431000"
	I0429 12:25:57.727331   17722 addons.go:69] Setting default-storageclass=true in profile "ha-431000"
	I0429 12:25:57.727351   17722 addons.go:234] Setting addon storage-provisioner=true in "ha-431000"
	I0429 12:25:57.727356   17722 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-431000"
	I0429 12:25:57.727369   17722 host.go:66] Checking if "ha-431000" exists ...
	I0429 12:25:57.727424   17722 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:25:57.727601   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:57.727602   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:57.727616   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:57.727617   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:57.736886   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61984
	I0429 12:25:57.737295   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:57.737678   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:57.737690   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:57.737981   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61986
	I0429 12:25:57.738074   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:57.738331   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:57.738533   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:57.738579   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:57.738658   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:57.738671   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:57.739264   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:57.739478   17722 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0429 12:25:57.739646   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:57.739702   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:57.741554   17722 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/18774-15855/kubeconfig
	I0429 12:25:57.741775   17722 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/18774-15855/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xdd3cee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 12:25:57.742264   17722 cert_rotation.go:137] Starting client certificate rotation controller
	I0429 12:25:57.742421   17722 addons.go:234] Setting addon default-storageclass=true in "ha-431000"
	I0429 12:25:57.742443   17722 host.go:66] Checking if "ha-431000" exists ...
	I0429 12:25:57.742660   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:57.742677   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:57.747696   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61988
	I0429 12:25:57.748147   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:57.748482   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:57.748498   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:57.748745   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:57.748874   17722 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0429 12:25:57.748975   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:57.749041   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:57.750035   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:57.788044   17722 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 12:25:57.751748   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61990
	I0429 12:25:57.825563   17722 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 12:25:57.825579   17722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 12:25:57.825595   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:57.825776   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:57.825980   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:57.826026   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:57.826183   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:57.826334   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:57.826500   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:57.826516   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:57.826839   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:57.827237   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:57.827262   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:57.835538   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 12:25:57.836072   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61993
	I0429 12:25:57.836372   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:57.836712   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:57.836731   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:57.836941   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:57.837051   17722 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0429 12:25:57.837149   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:57.837235   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:57.838190   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:57.838373   17722 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 12:25:57.838382   17722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 12:25:57.838392   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:57.838527   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:57.838615   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:57.838701   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:57.838792   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:57.921992   17722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 12:25:57.963950   17722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 12:25:58.239892   17722 start.go:946] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
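	The replace command at 12:25:57.835 rewrites the coredns ConfigMap in place: a hosts block resolving host.minikube.internal to the host gateway (192.169.0.1 here) is inserted just above the forward plugin, and a log directive is added before errors, which is what the "host record injected" message above confirms. Assuming an otherwise stock Corefile, the edited fragment would look roughly like this (illustrative reconstruction from the sed expressions, not captured output):

        hosts {
           192.169.0.1 host.minikube.internal
           fallthrough
        }
        forward . /etc/resolv.conf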
	I0429 12:25:58.350022   17722 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:58.350036   17722 main.go:141] libmachine: (ha-431000) Calling .Close
	I0429 12:25:58.350091   17722 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:58.350105   17722 main.go:141] libmachine: (ha-431000) Calling .Close
	I0429 12:25:58.350223   17722 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0429 12:25:58.350224   17722 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:58.350239   17722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:58.350250   17722 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:58.350258   17722 main.go:141] libmachine: (ha-431000) Calling .Close
	I0429 12:25:58.350278   17722 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:58.350288   17722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:58.350302   17722 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:58.350302   17722 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0429 12:25:58.350309   17722 main.go:141] libmachine: (ha-431000) Calling .Close
	I0429 12:25:58.350428   17722 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:58.350439   17722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:58.350445   17722 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0429 12:25:58.350487   17722 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:58.350497   17722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:58.350532   17722 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0429 12:25:58.350553   17722 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0429 12:25:58.350558   17722 round_trippers.go:469] Request Headers:
	I0429 12:25:58.350566   17722 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:25:58.350571   17722 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0429 12:25:58.356148   17722 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:25:58.356671   17722 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0429 12:25:58.356684   17722 round_trippers.go:469] Request Headers:
	I0429 12:25:58.356690   17722 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:25:58.356695   17722 round_trippers.go:473]     Content-Type: application/json
	I0429 12:25:58.356699   17722 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0429 12:25:58.359632   17722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:25:58.359767   17722 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:58.359777   17722 main.go:141] libmachine: (ha-431000) Calling .Close
	I0429 12:25:58.359915   17722 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0429 12:25:58.359935   17722 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:58.359942   17722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:58.383304   17722 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0429 12:25:58.425404   17722 addons.go:505] duration metric: took 698.121896ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0429 12:25:58.425460   17722 start.go:245] waiting for cluster config update ...
	I0429 12:25:58.425487   17722 start.go:254] writing updated cluster config ...
	I0429 12:25:58.541992   17722 out.go:177] 
	I0429 12:25:58.562923   17722 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:25:58.563008   17722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json ...
	I0429 12:25:58.583885   17722 out.go:177] * Starting "ha-431000-m02" control-plane node in "ha-431000" cluster
	I0429 12:25:58.624732   17722 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 12:25:58.624767   17722 cache.go:56] Caching tarball of preloaded images
	I0429 12:25:58.624960   17722 preload.go:173] Found /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 12:25:58.624976   17722 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 12:25:58.625077   17722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json ...
	I0429 12:25:58.625802   17722 start.go:360] acquireMachinesLock for ha-431000-m02: {Name:mkd2435de3b02389c894a9c5afc7bb14c11265dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 12:25:58.625882   17722 start.go:364] duration metric: took 63.962µs to acquireMachinesLock for "ha-431000-m02"
	I0429 12:25:58.625908   17722 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 12:25:58.625983   17722 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0429 12:25:58.647279   17722 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 12:25:58.647410   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:58.647441   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:58.657607   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61997
	I0429 12:25:58.657937   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:58.658289   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:58.658303   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:58.658522   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:58.658625   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0429 12:25:58.658714   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:25:58.658812   17722 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0429 12:25:58.658829   17722 client.go:168] LocalClient.Create starting
	I0429 12:25:58.658858   17722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem
	I0429 12:25:58.658895   17722 main.go:141] libmachine: Decoding PEM data...
	I0429 12:25:58.658904   17722 main.go:141] libmachine: Parsing certificate...
	I0429 12:25:58.658941   17722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem
	I0429 12:25:58.658966   17722 main.go:141] libmachine: Decoding PEM data...
	I0429 12:25:58.658976   17722 main.go:141] libmachine: Parsing certificate...
	I0429 12:25:58.658987   17722 main.go:141] libmachine: Running pre-create checks...
	I0429 12:25:58.658993   17722 main.go:141] libmachine: (ha-431000-m02) Calling .PreCreateCheck
	I0429 12:25:58.659060   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:58.659082   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0429 12:25:58.684517   17722 main.go:141] libmachine: Creating machine...
	I0429 12:25:58.684548   17722 main.go:141] libmachine: (ha-431000-m02) Calling .Create
	I0429 12:25:58.684719   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:58.685030   17722 main.go:141] libmachine: (ha-431000-m02) DBG | I0429 12:25:58.684708   17751 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/18774-15855/.minikube
	I0429 12:25:58.685090   17722 main.go:141] libmachine: (ha-431000-m02) Downloading /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18774-15855/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 12:25:58.880461   17722 main.go:141] libmachine: (ha-431000-m02) DBG | I0429 12:25:58.880334   17751 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa...
	I0429 12:25:58.972148   17722 main.go:141] libmachine: (ha-431000-m02) DBG | I0429 12:25:58.972072   17751 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk...
	I0429 12:25:58.972171   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Writing magic tar header
	I0429 12:25:58.972194   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Writing SSH key tar header
	I0429 12:25:58.972426   17722 main.go:141] libmachine: (ha-431000-m02) DBG | I0429 12:25:58.972393   17751 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02 ...
	I0429 12:25:59.330149   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:59.330167   17722 main.go:141] libmachine: (ha-431000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/hyperkit.pid
	I0429 12:25:59.330184   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Using UUID 9365fc26-c2e1-4d51-b9c4-d17102d62f4c
	I0429 12:25:59.355313   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Generated MAC ca:19:33:23:af:66
	I0429 12:25:59.355328   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0429 12:25:59.355356   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9365fc26-c2e1-4d51-b9c4-d17102d62f4c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b21b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0429 12:25:59.355382   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9365fc26-c2e1-4d51-b9c4-d17102d62f4c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b21b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0429 12:25:59.355419   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9365fc26-c2e1-4d51-b9c4-d17102d62f4c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0429 12:25:59.355452   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9365fc26-c2e1-4d51-b9c4-d17102d62f4c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0429 12:25:59.355465   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0429 12:25:59.358434   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: Pid is 17752
	I0429 12:25:59.359551   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 0
	I0429 12:25:59.359562   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:59.359638   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:25:59.360574   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:25:59.360681   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:25:59.360697   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:25:59.360720   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:59.360739   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:59.360752   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:59.360762   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:59.360776   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:59.360790   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:59.360800   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:59.360809   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:59.360829   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:59.360841   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:59.360856   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:59.360868   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:59.360879   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:59.360894   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:59.360903   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:59.360932   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:59.360940   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:59.360949   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:59.360958   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:59.360966   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:59.360973   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:59.360981   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:59.360987   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:59.360995   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:59.361001   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:59.361007   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:59.361029   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:59.361043   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:59.361055   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:59.361062   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:59.361069   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:59.361077   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:59.361085   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:59.361094   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:59.361102   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:59.361111   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:59.361122   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:59.361132   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:59.361139   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:59.361146   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:59.361155   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:59.361168   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:59.361180   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:59.361200   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:59.361215   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:59.361230   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:59.361243   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:59.361254   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:59.361267   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:59.361281   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:59.361294   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:59.365988   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0429 12:25:59.374154   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0429 12:25:59.374983   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0429 12:25:59.375007   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0429 12:25:59.375049   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0429 12:25:59.375071   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0429 12:25:59.757805   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0429 12:25:59.757822   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0429 12:25:59.872427   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0429 12:25:59.872442   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0429 12:25:59.872451   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0429 12:25:59.872459   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0429 12:25:59.873348   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0429 12:25:59.873361   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0429 12:26:01.362461   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 1
	I0429 12:26:01.362478   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:01.362589   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:01.363375   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:26:01.363463   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:26:01.363472   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:26:01.363482   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:26:01.363495   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:26:01.363505   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:26:01.363523   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:26:01.363540   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:26:01.363555   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:26:01.363564   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:26:01.363570   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:26:01.363580   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:26:01.363585   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:26:01.363594   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:26:01.363601   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:26:01.363606   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:26:01.363615   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:26:01.363620   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:26:01.363627   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:26:01.363641   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:26:01.363649   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:26:01.363656   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:26:01.363664   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:26:01.363671   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:26:01.363678   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:26:01.363685   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:26:01.363691   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:26:01.363698   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:26:01.363706   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:26:01.363715   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:26:01.363729   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:26:01.363745   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:26:01.363752   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:26:01.363758   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:26:01.363763   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:26:01.363770   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:26:01.363779   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:26:01.363786   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:26:01.363791   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:26:01.363806   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:26:01.363819   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:26:01.363827   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:26:01.363833   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:26:01.363839   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:26:01.363847   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:26:01.363854   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:26:01.363862   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:26:01.363876   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:26:01.363887   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:26:01.363897   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:26:01.363909   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:26:01.363929   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:26:01.363942   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:26:01.363951   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:26:03.364674   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 2
	I0429 12:26:03.364689   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:03.364761   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:03.365538   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:26:03.365618   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:26:03.365628   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:26:03.365649   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:26:03.365656   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:26:03.365663   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:26:03.365668   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:26:03.365681   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:26:03.365689   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:26:03.365701   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:26:03.365714   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:26:03.365721   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:26:03.365734   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:26:03.365745   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:26:03.365754   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:26:03.365760   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:26:03.365766   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:26:03.365776   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:26:03.365783   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:26:03.365789   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:26:03.365795   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:26:03.365802   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:26:03.365809   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:26:03.365820   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:26:03.365828   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:26:03.365835   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:26:03.365840   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:26:03.365850   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:26:03.365857   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:26:03.365871   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:26:03.365880   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:26:03.365887   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:26:03.365894   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:26:03.365906   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:26:03.365922   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:26:03.365930   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:26:03.365939   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:26:03.365955   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:26:03.365964   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:26:03.365971   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:26:03.365979   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:26:03.365986   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:26:03.365993   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:26:03.366000   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:26:03.366008   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:26:03.366015   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:26:03.366022   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:26:03.366028   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:26:03.366035   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:26:03.366043   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:26:03.366050   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:26:03.366067   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:26:03.366079   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:26:03.366088   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:26:05.177799   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:26:05 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0429 12:26:05.177845   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:26:05 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0429 12:26:05.177853   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:26:05 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0429 12:26:05.201640   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:26:05 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0429 12:26:05.367080   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 3
	I0429 12:26:05.367108   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:05.367266   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:05.368694   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:26:05.368836   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:26:05.368855   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:26:05.368872   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:26:05.368902   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:26:05.368917   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:26:05.368932   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:26:05.368944   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:26:05.368959   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:26:05.368998   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:26:05.369028   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:26:05.369054   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:26:05.369067   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:26:05.369081   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:26:05.369089   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:26:05.369107   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:26:05.369129   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:26:05.369139   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:26:05.369150   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:26:05.369170   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:26:05.369187   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:26:05.369198   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:26:05.369209   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:26:05.369224   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:26:05.369237   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:26:05.369248   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:26:05.369260   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:26:05.369269   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:26:05.369280   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:26:05.369290   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:26:05.369300   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:26:05.369309   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:26:05.369317   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:26:05.369342   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:26:05.369360   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:26:05.369371   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:26:05.369383   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:26:05.369391   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:26:05.369402   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:26:05.369412   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:26:05.369420   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:26:05.369429   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:26:05.369437   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:26:05.369446   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:26:05.369456   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:26:05.369467   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:26:05.369476   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:26:05.369491   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:26:05.369505   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:26:05.369516   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:26:05.369528   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:26:05.369539   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:26:05.369549   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:26:05.369556   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:26:07.369177   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 4
	I0429 12:26:07.369192   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:07.369292   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:07.370069   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:26:07.370166   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:26:07.370180   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:26:07.370193   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:26:07.370204   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:26:07.370215   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:26:07.370228   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:26:07.370247   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:26:07.370259   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:26:07.370274   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:26:07.370287   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:26:07.370296   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:26:07.370304   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:26:07.370309   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:26:07.370315   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:26:07.370344   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:26:07.370354   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:26:07.370363   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:26:07.370371   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:26:07.370384   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:26:07.370397   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:26:07.370403   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:26:07.370411   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:26:07.370418   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:26:07.370423   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:26:07.370438   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:26:07.370446   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:26:07.370452   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:26:07.370467   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:26:07.370478   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:26:07.370508   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:26:07.370534   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:26:07.370550   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:26:07.370567   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:26:07.370582   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:26:07.370595   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:26:07.370607   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:26:07.370629   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:26:07.370639   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:26:07.370646   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:26:07.370659   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:26:07.370669   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:26:07.370678   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:26:07.370685   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:26:07.370695   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:26:07.370729   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:26:07.370740   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:26:07.370745   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:26:07.370768   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:26:07.370775   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:26:07.370785   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:26:07.370791   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:26:07.370807   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:26:07.370822   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:26:09.370416   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 5
	I0429 12:26:09.370434   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:09.370529   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:09.371299   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:26:09.371370   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 53 entries in /var/db/dhcpd_leases!
	I0429 12:26:09.371380   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:ca:19:33:23:af:66 ID:1,ca:19:33:23:af:66 Lease:0x663145d0}
	I0429 12:26:09.371387   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found match: ca:19:33:23:af:66
	I0429 12:26:09.371394   17722 main.go:141] libmachine: (ha-431000-m02) DBG | IP: 192.169.0.54
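The attempts above show the hyperkit driver polling /var/db/dhcpd_leases for the new VM's MAC address (ca:19:33:23:af:66) until a lease appears, then taking that lease's IP (192.169.0.54). Below is a minimal Go sketch of that MAC-to-IP lookup over entries shaped like the "dhcp entry" lines logged above. This is illustrative only, not the docker-machine-driver-hyperkit implementation; parseEntry and findIPByMAC are hypothetical helpers and the entry format assumed is simply the one printed in this log.

package main

import (
	"fmt"
	"regexp"
	"strings"
)

// leaseEntry mirrors the fields visible in the "dhcp entry" log lines above.
type leaseEntry struct {
	Name, IPAddress, HWAddress string
}

// parseEntry extracts Name, IPAddress and HWAddress from one
// "{Name:... IPAddress:... HWAddress:... ...}" line.
func parseEntry(line string) (leaseEntry, bool) {
	re := regexp.MustCompile(`Name:(\S*) IPAddress:(\S+) HWAddress:(\S+)`)
	m := re.FindStringSubmatch(line)
	if m == nil {
		return leaseEntry{}, false
	}
	return leaseEntry{Name: m[1], IPAddress: m[2], HWAddress: m[3]}, true
}

// findIPByMAC scans all entries and returns the IP whose HWAddress matches mac.
func findIPByMAC(entries []string, mac string) (string, bool) {
	for _, line := range entries {
		if e, ok := parseEntry(line); ok && strings.EqualFold(e.HWAddress, mac) {
			return e.IPAddress, true
		}
	}
	return "", false
}

func main() {
	entries := []string{
		"{Name:minikube IPAddress:192.169.0.54 HWAddress:ca:19:33:23:af:66 ID:1,ca:19:33:23:af:66 Lease:0x663145d0}",
	}
	if ip, ok := findIPByMAC(entries, "ca:19:33:23:af:66"); ok {
		fmt.Println("IP:", ip) // IP: 192.169.0.54
	}
}

In the log, the same scan is repeated every two seconds (Attempt 1 through Attempt 5) until the lease for the VM's MAC shows up.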
	I0429 12:26:09.371443   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0429 12:26:09.371998   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:09.372094   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:09.372184   17722 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 12:26:09.372192   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0429 12:26:09.372282   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:09.372344   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:09.373081   17722 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 12:26:09.373088   17722 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 12:26:09.373091   17722 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 12:26:09.373096   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:09.373185   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:09.373265   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:09.373344   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:09.373427   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:09.373540   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:09.373715   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:09.373722   17722 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 12:26:09.398851   17722 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0429 12:26:12.461040   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
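The lines above show the WaitForSSH step: the driver repeatedly runs "exit 0" over SSH and treats the first clean exit as proof the guest is reachable; the earlier handshake failure is simply retried. The following is a rough sketch of such a probe using golang.org/x/crypto/ssh, assuming key-based auth as in this log. waitForSSH is a hypothetical helper for illustration, not libmachine's code.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials addr with the given private key and runs "exit 0",
// retrying until the command succeeds or the deadline passes.
func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	keyBytes, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(keyBytes)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // throwaway test VM; no known_hosts check
		Timeout:         5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			session, serr := client.NewSession()
			if serr == nil {
				runErr := session.Run("exit 0")
				session.Close()
				client.Close()
				if runErr == nil {
					return nil // SSH is available
				}
			} else {
				client.Close()
			}
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("ssh not available on %s after %s", addr, timeout)
		}
		time.Sleep(3 * time.Second)
	}
}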
	I0429 12:26:12.461055   17722 main.go:141] libmachine: Detecting the provisioner...
	I0429 12:26:12.461061   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.461182   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.461292   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.461377   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.461470   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.461607   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:12.461746   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:12.461754   17722 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 12:26:12.523397   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 12:26:12.523438   17722 main.go:141] libmachine: found compatible host: buildroot
	I0429 12:26:12.523445   17722 main.go:141] libmachine: Provisioning with buildroot...
	I0429 12:26:12.523450   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0429 12:26:12.523578   17722 buildroot.go:166] provisioning hostname "ha-431000-m02"
	I0429 12:26:12.523587   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0429 12:26:12.523662   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.523743   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.523829   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.523933   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.524027   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.524160   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:12.524319   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:12.524328   17722 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m02 && echo "ha-431000-m02" | sudo tee /etc/hostname
	I0429 12:26:12.595067   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m02
	
	I0429 12:26:12.595081   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.595219   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.595311   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.595396   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.595487   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.595609   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:12.595751   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:12.595763   17722 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 12:26:12.665199   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:26:12.665226   17722 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18774-15855/.minikube CaCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18774-15855/.minikube}
	I0429 12:26:12.665245   17722 buildroot.go:174] setting up certificates
	I0429 12:26:12.665255   17722 provision.go:84] configureAuth start
	I0429 12:26:12.665262   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0429 12:26:12.665398   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0429 12:26:12.665483   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.665568   17722 provision.go:143] copyHostCerts
	I0429 12:26:12.665595   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem
	I0429 12:26:12.665640   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem, removing ...
	I0429 12:26:12.665646   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem
	I0429 12:26:12.665779   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem (1082 bytes)
	I0429 12:26:12.665976   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem
	I0429 12:26:12.666005   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem, removing ...
	I0429 12:26:12.666009   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem
	I0429 12:26:12.666094   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem (1123 bytes)
	I0429 12:26:12.666237   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem
	I0429 12:26:12.666265   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem, removing ...
	I0429 12:26:12.666269   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem
	I0429 12:26:12.666334   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem (1675 bytes)
	I0429 12:26:12.666486   17722 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m02 san=[127.0.0.1 192.169.0.54 ha-431000-m02 localhost minikube]
	I0429 12:26:12.788915   17722 provision.go:177] copyRemoteCerts
	I0429 12:26:12.788965   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 12:26:12.788978   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.789131   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.789238   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.789318   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.789407   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0429 12:26:12.827773   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 12:26:12.827848   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 12:26:12.847569   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 12:26:12.847650   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 12:26:12.867106   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 12:26:12.867164   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 12:26:12.886794   17722 provision.go:87] duration metric: took 221.533222ms to configureAuth
	I0429 12:26:12.886806   17722 buildroot.go:189] setting minikube options for container-runtime
	I0429 12:26:12.886945   17722 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:26:12.886958   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:12.887088   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.887189   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.887288   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.887371   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.887445   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.887559   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:12.887688   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:12.887695   17722 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 12:26:12.949449   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 12:26:12.960507   17722 buildroot.go:70] root file system type: tmpfs
	I0429 12:26:12.960630   17722 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 12:26:12.960650   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.960811   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.960918   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.961031   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.961144   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.961314   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:12.961496   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:12.961559   17722 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.53"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 12:26:13.034441   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.53
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 12:26:13.034459   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:13.034584   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:13.034670   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:13.034771   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:13.034868   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:13.034990   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:13.035131   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:13.035143   17722 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 12:26:14.527695   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 12:26:14.527713   17722 main.go:141] libmachine: Checking connection to Docker...
	I0429 12:26:14.527719   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetURL
	I0429 12:26:14.527859   17722 main.go:141] libmachine: Docker is up and running!
	I0429 12:26:14.527867   17722 main.go:141] libmachine: Reticulating splines...
	I0429 12:26:14.527872   17722 client.go:171] duration metric: took 15.869221462s to LocalClient.Create
	I0429 12:26:14.527883   17722 start.go:167] duration metric: took 15.869255705s to libmachine.API.Create "ha-431000"
	I0429 12:26:14.527888   17722 start.go:293] postStartSetup for "ha-431000-m02" (driver="hyperkit")
	I0429 12:26:14.527895   17722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 12:26:14.527905   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:14.528059   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 12:26:14.528083   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:14.528190   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:14.528277   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:14.528371   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:14.528462   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0429 12:26:14.571874   17722 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 12:26:14.575715   17722 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 12:26:14.575725   17722 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18774-15855/.minikube/addons for local assets ...
	I0429 12:26:14.575820   17722 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18774-15855/.minikube/files for local assets ...
	I0429 12:26:14.575979   17722 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem -> 163012.pem in /etc/ssl/certs
	I0429 12:26:14.575985   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem -> /etc/ssl/certs/163012.pem
	I0429 12:26:14.576144   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 12:26:14.585193   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem --> /etc/ssl/certs/163012.pem (1708 bytes)
	I0429 12:26:14.617800   17722 start.go:296] duration metric: took 89.904272ms for postStartSetup
	I0429 12:26:14.617830   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0429 12:26:14.618443   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0429 12:26:14.618595   17722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json ...
	I0429 12:26:14.619362   17722 start.go:128] duration metric: took 15.993555623s to createHost
	I0429 12:26:14.619378   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:14.619471   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:14.619554   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:14.619637   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:14.619749   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:14.619852   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:14.619986   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:14.619993   17722 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 12:26:14.684608   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714418773.901729701
	
	I0429 12:26:14.684618   17722 fix.go:216] guest clock: 1714418773.901729701
	I0429 12:26:14.684638   17722 fix.go:229] Guest: 2024-04-29 12:26:13.901729701 -0700 PDT Remote: 2024-04-29 12:26:14.619372 -0700 PDT m=+66.737316646 (delta=-717.642299ms)
	I0429 12:26:14.684650   17722 fix.go:200] guest clock delta is within tolerance: -717.642299ms
	I0429 12:26:14.684654   17722 start.go:83] releasing machines lock for "ha-431000-m02", held for 16.058950363s
	I0429 12:26:14.684671   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:14.684806   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0429 12:26:14.706148   17722 out.go:177] * Found network options:
	I0429 12:26:14.727219   17722 out.go:177]   - NO_PROXY=192.169.0.53
	W0429 12:26:14.749052   17722 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 12:26:14.749119   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:14.749950   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:14.750211   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:14.750321   17722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 12:26:14.750357   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	W0429 12:26:14.750432   17722 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 12:26:14.750551   17722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 12:26:14.750572   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:14.750573   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:14.750783   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:14.750814   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:14.751054   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:14.751092   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:14.751333   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0429 12:26:14.751365   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:14.751506   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	W0429 12:26:14.787217   17722 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 12:26:14.787277   17722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 12:26:14.848201   17722 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 12:26:14.848227   17722 start.go:494] detecting cgroup driver to use...
	I0429 12:26:14.848358   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:26:14.865009   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 12:26:14.874483   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 12:26:14.883536   17722 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 12:26:14.883585   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 12:26:14.892833   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:26:14.901853   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 12:26:14.910642   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:26:14.919451   17722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 12:26:14.928628   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 12:26:14.937504   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 12:26:14.946508   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 12:26:14.955539   17722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 12:26:14.963731   17722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 12:26:14.971817   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:26:15.075369   17722 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 12:26:15.095140   17722 start.go:494] detecting cgroup driver to use...
	I0429 12:26:15.095227   17722 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 12:26:15.114227   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:26:15.131639   17722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 12:26:15.152206   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:26:15.163545   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:26:15.173829   17722 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 12:26:15.196676   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:26:15.207120   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:26:15.222241   17722 ssh_runner.go:195] Run: which cri-dockerd
	I0429 12:26:15.225235   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 12:26:15.232398   17722 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 12:26:15.245911   17722 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 12:26:15.342741   17722 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 12:26:15.443507   17722 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 12:26:15.443528   17722 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 12:26:15.456917   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:26:15.554009   17722 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 12:27:16.592140   17722 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.038821319s)
	I0429 12:27:16.592205   17722 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0429 12:27:16.627966   17722 out.go:177] 
	W0429 12:27:16.650769   17722 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 19:26:12 ha-431000-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 19:26:12 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:12.562442558Z" level=info msg="Starting up"
	Apr 29 19:26:12 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:12.563076350Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 19:26:12 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:12.567928747Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=527
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.583689216Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597320910Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597367066Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597410988Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597446593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597506031Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597540413Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597805772Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597892122Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597933427Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597944227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.598009110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.598199858Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.599844343Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.599884447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.599970866Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.600003444Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.600098323Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.600162313Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.600172836Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.602841911Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.602899413Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.602913390Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.602923442Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.602932201Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603001707Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603176643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603272136Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603306202Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603317268Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603331817Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603343600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603351894Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603363377Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603373358Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603384153Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603392284Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603399425Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603411790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603424697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603433133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603441352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603448955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603456927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603465082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603473249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603488023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603499629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603507180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603514591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603523215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603533846Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603547413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603558028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603566468Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603593185Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603603060Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603610561Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603617284Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603705135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603740552Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603750151Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603879153Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603936791Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603967392Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603999139Z" level=info msg="containerd successfully booted in 0.021874s"
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.594997677Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.598794560Z" level=info msg="Loading containers: start."
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.706853403Z" level=info msg="Loading containers: done."
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.713747539Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.713886174Z" level=info msg="Daemon has completed initialization"
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.742027280Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 19:26:13 ha-431000-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.744424276Z" level=info msg="API listen on [::]:2376"
	Apr 29 19:26:14 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:14.784502897Z" level=info msg="Processing signal 'terminated'"
	Apr 29 19:26:14 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:14.785439200Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 19:26:14 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:14.785771501Z" level=info msg="Daemon shutdown complete"
	Apr 29 19:26:14 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:14.785832350Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 19:26:14 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:14.785935215Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 19:26:14 ha-431000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 19:26:15 ha-431000-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 19:26:15 ha-431000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 19:26:15 ha-431000-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 19:26:15 ha-431000-m02 dockerd[864]: time="2024-04-29T19:26:15.839252866Z" level=info msg="Starting up"
	Apr 29 19:27:16 ha-431000-m02 dockerd[864]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 19:27:16 ha-431000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 19:27:16 ha-431000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 19:27:16 ha-431000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0429 12:27:16.650883   17722 out.go:239] * 
	W0429 12:27:16.652130   17722 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 12:27:16.735182   17722 out.go:177] 
	
	
	==> Docker <==
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.652190959Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.653396462Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.680700376Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.680924340Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.681014604Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:26:07 ha-431000 dockerd[1119]: time="2024-04-29T19:26:07.681180406Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:27:18 ha-431000 dockerd[1113]: 2024/04/29 19:27:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:27:18 ha-431000 dockerd[1113]: 2024/04/29 19:27:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:27:18 ha-431000 dockerd[1113]: 2024/04/29 19:27:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:27:18 ha-431000 dockerd[1113]: 2024/04/29 19:27:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:27:18 ha-431000 dockerd[1113]: 2024/04/29 19:27:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:27:18 ha-431000 dockerd[1113]: 2024/04/29 19:27:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:27:19 ha-431000 dockerd[1113]: 2024/04/29 19:27:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:27:19 ha-431000 dockerd[1113]: 2024/04/29 19:27:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:27:19 ha-431000 dockerd[1113]: 2024/04/29 19:27:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:27:21 ha-431000 dockerd[1119]: time="2024-04-29T19:27:21.826378264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 19:27:21 ha-431000 dockerd[1119]: time="2024-04-29T19:27:21.826530023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 19:27:21 ha-431000 dockerd[1119]: time="2024-04-29T19:27:21.826603491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:27:21 ha-431000 dockerd[1119]: time="2024-04-29T19:27:21.826775463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:27:21 ha-431000 cri-dockerd[1017]: time="2024-04-29T19:27:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a77f2c4e9a00b659bd23a71b2e06a6597386312621d0ef4b80802241e48f11f3/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 29 19:27:23 ha-431000 cri-dockerd[1017]: time="2024-04-29T19:27:23Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 29 19:27:23 ha-431000 dockerd[1119]: time="2024-04-29T19:27:23.131163360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 19:27:23 ha-431000 dockerd[1119]: time="2024-04-29T19:27:23.131274726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 19:27:23 ha-431000 dockerd[1119]: time="2024-04-29T19:27:23.131594744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:27:23 ha-431000 dockerd[1119]: time="2024-04-29T19:27:23.132740553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d991ad36f678a       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago      Running             busybox                   0                   a77f2c4e9a00b       busybox-fc5497c4f-hb957
	1eccc3b061df0       cbb01a7bd410d                                                                                         13 minutes ago      Running             coredns                   0                   1d309f3bad709       coredns-7db6d8ff4d-42q4w
	5fc9c9934ce6d       cbb01a7bd410d                                                                                         13 minutes ago      Running             coredns                   0                   b653ad03a91d0       coredns-7db6d8ff4d-vpt8j
	2fe73ca28b695       6e38f40d628db                                                                                         13 minutes ago      Running             storage-provisioner       0                   5910bbb306534       storage-provisioner
	2fc7f6cf26c66       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              13 minutes ago      Running             kindnet-cni               0                   77287d05e6e30       kindnet-7v2x2
	dd7e6dcf8c180       a0bf559e280cf                                                                                         13 minutes ago      Running             kube-proxy                0                   68ae02827d8f8       kube-proxy-24zxg
	25e21b3064d0f       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     13 minutes ago      Running             kube-vip                  0                   eb02781b4cb9e       kube-vip-ha-431000
	253cb761306be       c42f13656d0b2                                                                                         13 minutes ago      Running             kube-apiserver            0                   0b29b8a237387       kube-apiserver-ha-431000
	8689b2595f852       c7aad43836fa5                                                                                         13 minutes ago      Running             kube-controller-manager   0                   9353c1ab6af44       kube-controller-manager-ha-431000
	395f732c4ecd8       259c8277fcbbc                                                                                         13 minutes ago      Running             kube-scheduler            0                   b5d69e14edefa       kube-scheduler-ha-431000
	f0af9c768f874       3861cfcd7c04c                                                                                         13 minutes ago      Running             etcd                      0                   d3b2bec8afdb2       etcd-ha-431000
	
	
	==> coredns [1eccc3b061df] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59271 - 28720 "HINFO IN 7673301142508958601.5024328155936603833. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011541653s
	[INFO] 10.244.0.4:50854 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.054331652s
	[INFO] 10.244.0.4:59171 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.013535598s
	[INFO] 10.244.0.4:59678 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205886s
	[INFO] 10.244.0.4:40138 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.017345083s
	[INFO] 10.244.0.4:49479 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01082099s
	[INFO] 10.244.0.4:35756 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000317739s
	[INFO] 10.244.0.4:51111 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143818s
	[INFO] 10.244.0.4:50301 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000055524s
	
	
	==> coredns [5fc9c9934ce6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46682 - 61825 "HINFO IN 2083561514213073366.6289946638194081086. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012255792s
	[INFO] 10.244.0.4:44627 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165757s
	[INFO] 10.244.0.4:54619 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.011057673s
	[INFO] 10.244.0.4:40906 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000248422s
	[INFO] 10.244.0.4:38496 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162208s
	[INFO] 10.244.0.4:49726 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000162889s
	[INFO] 10.244.0.4:53369 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076076s
	[INFO] 10.244.0.4:38793 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118909s
	[INFO] 10.244.0.4:38671 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049671s
	
	
	==> describe nodes <==
	Name:               ha-431000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-431000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=ha-431000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T12_25_45_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:25:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-431000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:39:10 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:38:00 +0000   Mon, 29 Apr 2024 19:25:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:38:00 +0000   Mon, 29 Apr 2024 19:25:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:38:00 +0000   Mon, 29 Apr 2024 19:25:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:38:00 +0000   Mon, 29 Apr 2024 19:26:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.53
	  Hostname:    ha-431000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 0347e3fb0470420c86cb19dc5e37eb8c
	  System UUID:                24bd46dd-0000-0000-965c-91e20eb4b3f3
	  Boot ID:                    5a51c8ef-7deb-418c-a18b-22ccf6b75cab
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hb957              0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-7db6d8ff4d-42q4w             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-vpt8j             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-431000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-7v2x2                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-431000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-431000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-24zxg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-431000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-431000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node ha-431000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node ha-431000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node ha-431000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m   node-controller  Node ha-431000 event: Registered Node ha-431000 in Controller
	  Normal  NodeReady                13m   kubelet          Node ha-431000 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.683718] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.250560] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +1.108677] systemd-fstab-generator[500]: Ignoring "noauto" option for root device
	[  +0.110997] systemd-fstab-generator[512]: Ignoring "noauto" option for root device
	[  +1.889303] systemd-fstab-generator[735]: Ignoring "noauto" option for root device
	[  +0.059318] kauditd_printk_skb: 67 callbacks suppressed
	[  +0.256507] systemd-fstab-generator[770]: Ignoring "noauto" option for root device
	[  +0.100729] systemd-fstab-generator[782]: Ignoring "noauto" option for root device
	[  +0.117080] systemd-fstab-generator[796]: Ignoring "noauto" option for root device
	[  +2.384127] systemd-fstab-generator[970]: Ignoring "noauto" option for root device
	[  +0.098660] systemd-fstab-generator[982]: Ignoring "noauto" option for root device
	[  +0.104412] systemd-fstab-generator[994]: Ignoring "noauto" option for root device
	[  +0.132603] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +4.124699] systemd-fstab-generator[1104]: Ignoring "noauto" option for root device
	[  +0.056838] kauditd_printk_skb: 205 callbacks suppressed
	[  +2.446594] systemd-fstab-generator[1304]: Ignoring "noauto" option for root device
	[  +3.800237] systemd-fstab-generator[1492]: Ignoring "noauto" option for root device
	[  +0.050917] kauditd_printk_skb: 73 callbacks suppressed
	[  +7.212746] systemd-fstab-generator[1990]: Ignoring "noauto" option for root device
	[  +0.076591] kauditd_printk_skb: 72 callbacks suppressed
	[ +14.491572] kauditd_printk_skb: 12 callbacks suppressed
	[Apr29 19:26] kauditd_printk_skb: 38 callbacks suppressed
	[Apr29 19:27] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [f0af9c768f87] <==
	{"level":"info","ts":"2024-04-29T19:25:39.452287Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9dc59dab1a1aba0","initial-advertise-peer-urls":["https://192.169.0.53:2380"],"listen-peer-urls":["https://192.169.0.53:2380"],"advertise-client-urls":["https://192.169.0.53:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.53:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T19:25:39.452539Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T19:25:39.452814Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.169.0.53:2380"}
	{"level":"info","ts":"2024-04-29T19:25:39.4529Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.53:2380"}
	{"level":"info","ts":"2024-04-29T19:25:40.22976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc59dab1a1aba0 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-29T19:25:40.229854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc59dab1a1aba0 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-29T19:25:40.22989Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc59dab1a1aba0 received MsgPreVoteResp from 9dc59dab1a1aba0 at term 1"}
	{"level":"info","ts":"2024-04-29T19:25:40.229907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc59dab1a1aba0 became candidate at term 2"}
	{"level":"info","ts":"2024-04-29T19:25:40.22992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc59dab1a1aba0 received MsgVoteResp from 9dc59dab1a1aba0 at term 2"}
	{"level":"info","ts":"2024-04-29T19:25:40.229934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc59dab1a1aba0 became leader at term 2"}
	{"level":"info","ts":"2024-04-29T19:25:40.229975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9dc59dab1a1aba0 elected leader 9dc59dab1a1aba0 at term 2"}
	{"level":"info","ts":"2024-04-29T19:25:40.232269Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"9dc59dab1a1aba0","local-member-attributes":"{Name:ha-431000 ClientURLs:[https://192.169.0.53:2379]}","request-path":"/0/members/9dc59dab1a1aba0/attributes","cluster-id":"ab7009b4269bb52","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T19:25:40.232472Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T19:25:40.232823Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:25:40.233029Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T19:25:40.233086Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T19:25:40.236119Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T19:25:40.237354Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T19:25:40.237915Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.53:2379"}
	{"level":"info","ts":"2024-04-29T19:25:40.284926Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ab7009b4269bb52","local-member-id":"9dc59dab1a1aba0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:25:40.285192Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:25:40.285242Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:35:40.294897Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":967}
	{"level":"info","ts":"2024-04-29T19:35:40.297588Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":967,"took":"2.224451ms","hash":4246457979,"current-db-size-bytes":2424832,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2424832,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-04-29T19:35:40.297735Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4246457979,"revision":967,"compact-revision":-1}
	
	
	==> kernel <==
	 19:39:18 up 14 min,  0 users,  load average: 0.17, 0.14, 0.09
	Linux ha-431000 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2fc7f6cf26c6] <==
	I0429 19:37:13.068667       1 main.go:227] handling current node
	I0429 19:37:23.075833       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:37:23.075864       1 main.go:227] handling current node
	I0429 19:37:33.083407       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:37:33.083494       1 main.go:227] handling current node
	I0429 19:37:43.087668       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:37:43.087827       1 main.go:227] handling current node
	I0429 19:37:53.092400       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:37:53.092637       1 main.go:227] handling current node
	I0429 19:38:03.096843       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:38:03.096888       1 main.go:227] handling current node
	I0429 19:38:13.103825       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:38:13.103856       1 main.go:227] handling current node
	I0429 19:38:23.113180       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:38:23.113226       1 main.go:227] handling current node
	I0429 19:38:33.117375       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:38:33.117662       1 main.go:227] handling current node
	I0429 19:38:43.123849       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:38:43.124059       1 main.go:227] handling current node
	I0429 19:38:53.135472       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:38:53.135690       1 main.go:227] handling current node
	I0429 19:39:03.141037       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:39:03.141082       1 main.go:227] handling current node
	I0429 19:39:13.153671       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:39:13.153913       1 main.go:227] handling current node
	
	
	==> kube-apiserver [253cb761306b] <==
	I0429 19:25:41.298083       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0429 19:25:41.298355       1 controller.go:615] quota admission added evaluator for: namespaces
	I0429 19:25:41.298726       1 aggregator.go:165] initial CRD sync complete...
	I0429 19:25:41.298734       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 19:25:41.298739       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 19:25:41.298742       1 cache.go:39] Caches are synced for autoregister controller
	I0429 19:25:41.341235       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 19:25:42.199810       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0429 19:25:42.203107       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0429 19:25:42.203598       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 19:25:42.475929       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 19:25:42.501043       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 19:25:42.531435       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0429 19:25:42.536355       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.53]
	I0429 19:25:42.537008       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 19:25:42.539635       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 19:25:43.279622       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 19:25:44.788367       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 19:25:44.794388       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 19:25:44.803386       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 19:25:58.079997       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0429 19:25:58.240228       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0429 19:39:15.945064       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:62063: use of closed network connection
	E0429 19:39:16.427565       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:62071: use of closed network connection
	E0429 19:39:16.885095       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:62079: use of closed network connection
	
	
	==> kube-controller-manager [8689b2595f85] <==
	I0429 19:25:58.194805       1 shared_informer.go:320] Caches are synced for persistent volume
	I0429 19:25:58.197294       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.965304ms"
	I0429 19:25:58.210327       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-431000" podCIDRs=["10.244.0.0/24"]
	I0429 19:25:58.243038       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.71687ms"
	I0429 19:25:58.317120       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="73.53619ms"
	I0429 19:25:58.317171       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="31.981µs"
	I0429 19:25:58.580329       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 19:25:58.580784       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 19:25:58.589175       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 19:26:06.946975       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.17µs"
	I0429 19:26:06.950414       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="27.785µs"
	I0429 19:26:06.958967       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.397µs"
	I0429 19:26:06.969829       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.121µs"
	I0429 19:26:07.886482       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.564µs"
	I0429 19:26:07.917554       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="9.073261ms"
	I0429 19:26:07.917615       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.945µs"
	I0429 19:26:07.935068       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.658277ms"
	I0429 19:26:07.935131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="26.868µs"
	I0429 19:26:08.387295       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0429 19:27:20.003021       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.397546ms"
	I0429 19:27:20.009762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.493528ms"
	I0429 19:27:20.018226       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.419727ms"
	I0429 19:27:20.018502       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.67µs"
	I0429 19:27:23.408503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.094694ms"
	I0429 19:27:23.408538       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.077µs"
	
	
	==> kube-proxy [dd7e6dcf8c18] <==
	I0429 19:25:59.229445       1 server_linux.go:69] "Using iptables proxy"
	I0429 19:25:59.241484       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.53"]
	I0429 19:25:59.271201       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 19:25:59.271245       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 19:25:59.271262       1 server_linux.go:165] "Using iptables Proxier"
	I0429 19:25:59.273894       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 19:25:59.274358       1 server.go:872] "Version info" version="v1.30.0"
	I0429 19:25:59.274426       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:25:59.275610       1 config.go:192] "Starting service config controller"
	I0429 19:25:59.275903       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 19:25:59.275968       1 config.go:101] "Starting endpoint slice config controller"
	I0429 19:25:59.276082       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 19:25:59.276583       1 config.go:319] "Starting node config controller"
	I0429 19:25:59.277663       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 19:25:59.376395       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 19:25:59.376459       1 shared_informer.go:320] Caches are synced for service config
	I0429 19:25:59.377863       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [395f732c4ecd] <==
	W0429 19:25:41.293554       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 19:25:41.294406       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 19:25:41.294738       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 19:25:41.294772       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 19:25:41.295245       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 19:25:41.295295       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 19:25:41.295534       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 19:25:41.295865       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 19:25:41.296096       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 19:25:41.296336       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 19:25:41.296557       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 19:25:41.296590       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 19:25:41.296636       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 19:25:41.296645       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 19:25:41.297185       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 19:25:41.297218       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 19:25:41.298191       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 19:25:41.298225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 19:25:42.168872       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 19:25:42.168946       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 19:25:42.211386       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 19:25:42.211426       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 19:25:42.376364       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 19:25:42.376406       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0429 19:25:42.882998       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 19:34:44 ha-431000 kubelet[1997]: E0429 19:34:44.685510    1997 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:34:44 ha-431000 kubelet[1997]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:34:44 ha-431000 kubelet[1997]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:34:44 ha-431000 kubelet[1997]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:34:44 ha-431000 kubelet[1997]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:35:44 ha-431000 kubelet[1997]: E0429 19:35:44.685249    1997 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:35:44 ha-431000 kubelet[1997]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:35:44 ha-431000 kubelet[1997]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:35:44 ha-431000 kubelet[1997]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:35:44 ha-431000 kubelet[1997]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:36:44 ha-431000 kubelet[1997]: E0429 19:36:44.684785    1997 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:36:44 ha-431000 kubelet[1997]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:36:44 ha-431000 kubelet[1997]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:36:44 ha-431000 kubelet[1997]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:36:44 ha-431000 kubelet[1997]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:37:44 ha-431000 kubelet[1997]: E0429 19:37:44.685252    1997 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:37:44 ha-431000 kubelet[1997]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:37:44 ha-431000 kubelet[1997]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:37:44 ha-431000 kubelet[1997]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:37:44 ha-431000 kubelet[1997]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:38:44 ha-431000 kubelet[1997]: E0429 19:38:44.684405    1997 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:38:44 ha-431000 kubelet[1997]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:38:44 ha-431000 kubelet[1997]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:38:44 ha-431000 kubelet[1997]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:38:44 ha-431000 kubelet[1997]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	
	
	==> storage-provisioner [2fe73ca28b69] <==
	I0429 19:26:07.672573       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 19:26:07.699495       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 19:26:07.699524       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 19:26:07.714130       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 19:26:07.729973       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-431000_95565a70-289a-4179-a0be-cfc0aea5e428!
	I0429 19:26:07.727595       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"198bfd0e-cd93-4191-9956-e0c2576aab68", APIVersion:"v1", ResourceVersion:"426", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-431000_95565a70-289a-4179-a0be-cfc0aea5e428 became leader
	I0429 19:26:07.830551       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-431000_95565a70-289a-4179-a0be-cfc0aea5e428!
	

                                                
                                                
-- /stdout --
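The kubelet entries in the log above repeat a "Could not set up iptables canary" error because the guest kernel exposes no ip6tables nat table; this is routine noise in minikube guests and is very likely unrelated to the scheduling failure examined below. A minimal diagnostic sketch, assuming the ha-431000 profile is still running (these commands are illustrative and not part of the test itself):

	out/minikube-darwin-amd64 -p ha-431000 ssh "sudo ip6tables -t nat -L -n"   # expect the same "can't initialize ip6tables table `nat'" message
	out/minikube-darwin-amd64 -p ha-431000 ssh "lsmod | grep ip6table"         # check whether any ip6table_* module is loaded in the guest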
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-431000 -n ha-431000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-431000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-fwxpx busybox-fc5497c4f-lrnxn
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/DeployApp]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-431000 describe pod busybox-fc5497c4f-fwxpx busybox-fc5497c4f-lrnxn
helpers_test.go:282: (dbg) kubectl --context ha-431000 describe pod busybox-fc5497c4f-fwxpx busybox-fc5497c4f-lrnxn:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-fwxpx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4bjrr (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-4bjrr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  94s (x4 over 11m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-fc5497c4f-lrnxn
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z7jzf (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-z7jzf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  94s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/DeployApp FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/DeployApp (720.00s)
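Both Pending busybox replicas above fail scheduling on pod anti-affinity while only one node (ha-431000) is registered and Ready, so two of the three replicas can never be placed; the additional control-plane nodes the multi-control-plane test expects are not present in the node list. A quick way to confirm this from the same context, sketched under the assumption that the pods belong to a Deployment named busybox (the app=busybox label and the busybox-fc5497c4f ReplicaSet suggest this, but the Deployment name is not shown in the logs):

	kubectl --context ha-431000 get nodes -o wide
	kubectl --context ha-431000 get pods -l app=busybox -o wide
	kubectl --context ha-431000 get deployment busybox -o jsonpath='{.spec.template.spec.affinity.podAntiAffinity}'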

                                                
                                    
x

+
TestMultiControlPlane/serial/PingHostFromPods (3.71s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-fc5497c4f-fwxpx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:207: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-fc5497c4f-fwxpx -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (130.333748ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-fwxpx does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:209: Pod busybox-fc5497c4f-fwxpx could not resolve 'host.minikube.internal': exit status 1
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-fc5497c4f-hb957 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-fc5497c4f-hb957 -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-fc5497c4f-lrnxn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:207: (dbg) Non-zero exit: out/minikube-darwin-amd64 kubectl -p ha-431000 -- exec busybox-fc5497c4f-lrnxn -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3": exit status 1 (129.495562ms)

                                                
                                                
** stderr ** 
	Error from server (BadRequest): pod busybox-fc5497c4f-lrnxn does not have a host assigned

                                                
                                                
** /stderr **
ha_test.go:209: Pod busybox-fc5497c4f-lrnxn could not resolve 'host.minikube.internal': exit status 1
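The exec pipeline the test runs (nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3) takes the third space-separated field of the fifth line of the pod's nslookup output, i.e. the address resolved for host.minikube.internal. The failures here are not DNS failures at all: the two pods are still Pending, so the API server rejects kubectl exec with "does not have a host assigned". A hedged check, assuming the profile is still up, is to look at scheduling state before exec'ing:

	kubectl --context ha-431000 get pods -l app=busybox -o wide
	kubectl --context ha-431000 describe pod busybox-fc5497c4f-lrnxn | grep -A3 Events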
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000
helpers_test.go:244: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-431000 logs -n 25: (2.369613759s)
helpers_test.go:252: TestMultiControlPlane/serial/PingHostFromPods logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:37 PDT | 29 Apr 24 12:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:37 PDT | 29 Apr 24 12:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:37 PDT | 29 Apr 24 12:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:37 PDT | 29 Apr 24 12:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:37 PDT | 29 Apr 24 12:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:37 PDT | 29 Apr 24 12:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:37 PDT | 29 Apr 24 12:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:38 PDT | 29 Apr 24 12:38 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:38 PDT | 29 Apr 24 12:38 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT | 29 Apr 24 12:39 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT | 29 Apr 24 12:39 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT |                     |
	|         | busybox-fc5497c4f-fwxpx --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT | 29 Apr 24 12:39 PDT |
	|         | busybox-fc5497c4f-hb957 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT |                     |
	|         | busybox-fc5497c4f-lrnxn --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT |                     |
	|         | busybox-fc5497c4f-fwxpx --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT | 29 Apr 24 12:39 PDT |
	|         | busybox-fc5497c4f-hb957 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT |                     |
	|         | busybox-fc5497c4f-lrnxn --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT |                     |
	|         | busybox-fc5497c4f-fwxpx -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT | 29 Apr 24 12:39 PDT |
	|         | busybox-fc5497c4f-hb957 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT |                     |
	|         | busybox-fc5497c4f-lrnxn -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT | 29 Apr 24 12:39 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT |                     |
	|         | busybox-fc5497c4f-fwxpx              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT | 29 Apr 24 12:39 PDT |
	|         | busybox-fc5497c4f-hb957              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT | 29 Apr 24 12:39 PDT |
	|         | busybox-fc5497c4f-hb957 -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT |                     |
	|         | busybox-fc5497c4f-lrnxn              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 12:25:07
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 12:25:07.932524   17722 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:25:07.932931   17722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:25:07.932939   17722 out.go:304] Setting ErrFile to fd 2...
	I0429 12:25:07.932945   17722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:25:07.933219   17722 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18774-15855/.minikube/bin
	I0429 12:25:07.935193   17722 out.go:298] Setting JSON to false
	I0429 12:25:07.966866   17722 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":10477,"bootTime":1714408230,"procs":429,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0429 12:25:07.967005   17722 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 12:25:07.990175   17722 out.go:177] * [ha-431000] minikube v1.33.0 on Darwin 14.4.1
	I0429 12:25:08.073413   17722 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 12:25:08.048486   17722 notify.go:220] Checking for updates...
	I0429 12:25:08.160225   17722 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18774-15855/kubeconfig
	I0429 12:25:08.223714   17722 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 12:25:08.245577   17722 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:25:08.266426   17722 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18774-15855/.minikube
	I0429 12:25:08.296555   17722 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 12:25:08.318007   17722 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:25:08.348195   17722 out.go:177] * Using the hyperkit driver based on user configuration
	I0429 12:25:08.391518   17722 start.go:297] selected driver: hyperkit
	I0429 12:25:08.391552   17722 start.go:901] validating driver "hyperkit" against <nil>
	I0429 12:25:08.391574   17722 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 12:25:08.395979   17722 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:25:08.396124   17722 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18774-15855/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0429 12:25:08.404644   17722 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
	I0429 12:25:08.408717   17722 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:08.408755   17722 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0429 12:25:08.408783   17722 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 12:25:08.409041   17722 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 12:25:08.409091   17722 cni.go:84] Creating CNI manager for ""
	I0429 12:25:08.409101   17722 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0429 12:25:08.409113   17722 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 12:25:08.409179   17722 start.go:340] cluster config:
	{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docke
r CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0
GPUs: AutoPauseInterval:1m0s}
	I0429 12:25:08.409294   17722 iso.go:125] acquiring lock: {Name:mk81eeca500d1c63233c315450a8b707db3bb810 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:25:08.451413   17722 out.go:177] * Starting "ha-431000" primary control-plane node in "ha-431000" cluster
	I0429 12:25:08.472458   17722 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 12:25:08.472552   17722 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 12:25:08.472575   17722 cache.go:56] Caching tarball of preloaded images
	I0429 12:25:08.472808   17722 preload.go:173] Found /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 12:25:08.472832   17722 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 12:25:08.473307   17722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json ...
	I0429 12:25:08.473347   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json: {Name:mk35a218377c5d37033d174261b78cb0d686fe80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:08.474156   17722 start.go:360] acquireMachinesLock for ha-431000: {Name:mkd2435de3b02389c894a9c5afc7bb14c11265dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 12:25:08.474283   17722 start.go:364] duration metric: took 104.507µs to acquireMachinesLock for "ha-431000"
	I0429 12:25:08.474332   17722 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
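The entry above is the driver's dump of the full cluster and machine configuration it is about to provision: profile ha-431000, 2 CPUs, 2200 MB memory, 20000 MB disk, hyperkit driver, Kubernetes v1.30.0 on the docker runtime. Purely as an illustration of what the earlier profile-save step ("Saving config to .../profiles/ha-431000/config.json") is doing with such a structure, the sketch below models a small subset of those fields and writes them to a config.json file; the struct and field names are simplified assumptions for this sketch, not minikube's actual types.

// Minimal sketch (assumed, simplified types) of persisting a machine profile
// to config.json, loosely modeled on the fields visible in the log above.
package main

import (
	"encoding/json"
	"fmt"
	"os"
	"path/filepath"
)

type KubernetesConfig struct {
	KubernetesVersion string
	ClusterName       string
	ContainerRuntime  string
}

type ClusterConfig struct {
	Name             string
	Memory           int // MB
	CPUs             int
	DiskSize         int // MB
	Driver           string
	KubernetesConfig KubernetesConfig
}

// saveProfile marshals the config and writes it under dir as config.json.
func saveProfile(dir string, cfg ClusterConfig) error {
	data, err := json.MarshalIndent(cfg, "", "  ")
	if err != nil {
		return err
	}
	if err := os.MkdirAll(dir, 0o755); err != nil {
		return err
	}
	return os.WriteFile(filepath.Join(dir, "config.json"), data, 0o644)
}

func main() {
	cfg := ClusterConfig{
		Name: "ha-431000", Memory: 2200, CPUs: 2, DiskSize: 20000, Driver: "hyperkit",
		KubernetesConfig: KubernetesConfig{
			KubernetesVersion: "v1.30.0", ClusterName: "ha-431000", ContainerRuntime: "docker",
		},
	}
	if err := saveProfile(os.TempDir(), cfg); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}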
	I0429 12:25:08.474420   17722 start.go:125] createHost starting for "" (driver="hyperkit")
	I0429 12:25:08.516479   17722 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 12:25:08.516777   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:08.516851   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:08.526733   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61961
	I0429 12:25:08.527153   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:08.527581   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:08.527592   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:08.527830   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:08.527946   17722 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0429 12:25:08.528043   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:08.528148   17722 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0429 12:25:08.528173   17722 client.go:168] LocalClient.Create starting
	I0429 12:25:08.528213   17722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem
	I0429 12:25:08.528263   17722 main.go:141] libmachine: Decoding PEM data...
	I0429 12:25:08.528280   17722 main.go:141] libmachine: Parsing certificate...
	I0429 12:25:08.528335   17722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem
	I0429 12:25:08.528371   17722 main.go:141] libmachine: Decoding PEM data...
	I0429 12:25:08.528388   17722 main.go:141] libmachine: Parsing certificate...
	I0429 12:25:08.528404   17722 main.go:141] libmachine: Running pre-create checks...
	I0429 12:25:08.528414   17722 main.go:141] libmachine: (ha-431000) Calling .PreCreateCheck
	I0429 12:25:08.528488   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:08.528641   17722 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0429 12:25:08.529147   17722 main.go:141] libmachine: Creating machine...
	I0429 12:25:08.529155   17722 main.go:141] libmachine: (ha-431000) Calling .Create
	I0429 12:25:08.529220   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:08.529331   17722 main.go:141] libmachine: (ha-431000) DBG | I0429 12:25:08.529213   17730 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/18774-15855/.minikube
	I0429 12:25:08.529403   17722 main.go:141] libmachine: (ha-431000) Downloading /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18774-15855/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 12:25:08.704997   17722 main.go:141] libmachine: (ha-431000) DBG | I0429 12:25:08.704900   17730 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa...
	I0429 12:25:08.942793   17722 main.go:141] libmachine: (ha-431000) DBG | I0429 12:25:08.942700   17730 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/ha-431000.rawdisk...
	I0429 12:25:08.942814   17722 main.go:141] libmachine: (ha-431000) DBG | Writing magic tar header
	I0429 12:25:08.942822   17722 main.go:141] libmachine: (ha-431000) DBG | Writing SSH key tar header
	I0429 12:25:08.943194   17722 main.go:141] libmachine: (ha-431000) DBG | I0429 12:25:08.943155   17730 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000 ...
	I0429 12:25:09.300525   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:09.300543   17722 main.go:141] libmachine: (ha-431000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/hyperkit.pid
	I0429 12:25:09.300607   17722 main.go:141] libmachine: (ha-431000) DBG | Using UUID 24bdec35-56f8-46dd-965c-91e20eb4b3f3
	I0429 12:25:09.409075   17722 main.go:141] libmachine: (ha-431000) DBG | Generated MAC 12:79:5a:6e:b2:5a
	I0429 12:25:09.409094   17722 main.go:141] libmachine: (ha-431000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0429 12:25:09.409125   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"24bdec35-56f8-46dd-965c-91e20eb4b3f3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000198630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0429 12:25:09.409150   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"24bdec35-56f8-46dd-965c-91e20eb4b3f3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000198630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0429 12:25:09.409187   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "24bdec35-56f8-46dd-965c-91e20eb4b3f3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/ha-431000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0429 12:25:09.409213   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 24bdec35-56f8-46dd-965c-91e20eb4b3f3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/ha-431000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/console-ring -f kexec,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
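The two DEBUG entries above show the argument vector and the flattened command line the driver hands to /usr/local/bin/hyperkit (boot via kexec of the extracted bzimage/initrd, a virtio-blk raw disk, the boot2docker ISO on ahci-cd, and a com1 autopty for the console). As a rough illustration only, and not the driver's actual launch code, the sketch below shows how such a process launch could look from Go with os/exec; the pid-file path and the reduced flag set are hypothetical placeholders, and running it requires hyperkit to be installed.

// Illustrative sketch (assumed paths and flags) of launching hyperkit with an
// argument vector like the one logged above.
package main

import (
	"log"
	"os/exec"
)

func main() {
	args := []string{
		"-A", "-u",
		"-F", "/tmp/hyperkit.pid", // hypothetical pid-file location
		"-c", "2",
		"-m", "2200M",
		"-s", "0:0,hostbridge",
		"-s", "31,lpc",
	}
	cmd := exec.Command("/usr/local/bin/hyperkit", args...)
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("hyperkit failed: %v\n%s", err, out)
	}
	log.Printf("hyperkit output:\n%s", out)
}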
	I0429 12:25:09.409223   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0429 12:25:09.412103   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: Pid is 17733
	I0429 12:25:09.412560   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 0
	I0429 12:25:09.412590   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:09.412663   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:09.413636   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:09.413744   17722 main.go:141] libmachine: (ha-431000) DBG | Found 51 entries in /var/db/dhcpd_leases!
	I0429 12:25:09.413764   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:09.413783   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:09.413800   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:09.413811   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:09.413824   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:09.413850   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:09.413862   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:09.413875   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:09.413883   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:09.413914   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:09.413927   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:09.413957   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:09.413970   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:09.413986   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:09.413992   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:09.414001   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:09.414010   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:09.414024   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:09.414041   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:09.414056   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:09.414068   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:09.414082   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:09.414098   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:09.414114   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:09.414144   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:09.414157   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:09.414169   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:09.414178   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:09.414187   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:09.414196   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:09.414206   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:09.414215   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:09.414231   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:09.414248   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:09.414272   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:09.414288   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:09.414300   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:09.414315   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:09.414327   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:09.414339   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:09.414350   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:09.414363   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:09.414382   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:09.414397   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:09.414414   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:09.414428   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:09.414441   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:09.414455   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:09.414467   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:09.414490   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:09.414509   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
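The listing above is one pass of the driver's IP-discovery loop: it re-reads /var/db/dhcpd_leases and looks for a lease whose hardware address matches the MAC it generated for the VM (12:79:5a:6e:b2:5a); the numbered attempts that follow repeat the same scan until such a lease appears. Below is a minimal sketch of that matching step, assuming a pre-parsed slice of lease entries; the Lease struct and the hard-coded sample entries are illustrative assumptions, and parsing of the actual lease file format is omitted.

// Minimal sketch of matching a generated MAC against parsed DHCP lease entries.
package main

import (
	"fmt"
	"strings"
)

type Lease struct {
	Name      string
	IPAddress string
	HWAddress string
}

// findIPForMAC returns the IP address leased to the given MAC, if any.
func findIPForMAC(leases []Lease, mac string) (string, bool) {
	for _, l := range leases {
		if strings.EqualFold(l.HWAddress, mac) {
			return l.IPAddress, true
		}
	}
	return "", false
}

func main() {
	// Sample entries shaped like the dhcp entries printed in the log above.
	leases := []Lease{
		{Name: "minikube", IPAddress: "192.169.0.52", HWAddress: "66:67:b3:3f:36:4"},
		{Name: "minikube", IPAddress: "192.169.0.51", HWAddress: "9e:6f:85:8b:5c:e"},
	}
	if ip, ok := findIPForMAC(leases, "12:79:5a:6e:b2:5a"); ok {
		fmt.Println("VM IP:", ip)
	} else {
		fmt.Println("no lease yet; the driver sleeps and retries")
	}
}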
	I0429 12:25:09.419450   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0429 12:25:09.470495   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0429 12:25:09.471088   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0429 12:25:09.471110   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0429 12:25:09.471118   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0429 12:25:09.471128   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0429 12:25:09.850480   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0429 12:25:09.850496   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0429 12:25:09.966742   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0429 12:25:09.966763   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0429 12:25:09.966770   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0429 12:25:09.966776   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0429 12:25:09.967629   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0429 12:25:09.967639   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0429 12:25:11.414538   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 1
	I0429 12:25:11.414553   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:11.414604   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:11.415399   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:11.415486   17722 main.go:141] libmachine: (ha-431000) DBG | Found 51 entries in /var/db/dhcpd_leases!
	I0429 12:25:11.415495   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:11.415504   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:11.415511   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:11.415518   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:11.415539   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:11.415546   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:11.415584   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:11.415593   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:11.415606   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:11.415614   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:11.415624   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:11.415633   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:11.415641   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:11.415647   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:11.415654   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:11.415660   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:11.415667   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:11.415676   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:11.415682   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:11.415689   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:11.415695   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:11.415702   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:11.415709   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:11.415714   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:11.415722   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:11.415729   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:11.415749   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:11.415755   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:11.415763   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:11.415771   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:11.415782   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:11.415790   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:11.415798   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:11.415804   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:11.415809   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:11.415816   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:11.415822   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:11.415828   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:11.415836   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:11.415844   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:11.415850   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:11.415858   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:11.415864   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:11.415872   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:11.415879   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:11.415886   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:11.415893   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:11.415898   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:11.415910   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:11.415924   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:11.415936   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:13.416614   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 2
	I0429 12:25:13.416628   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:13.416689   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:13.417651   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:13.417739   17722 main.go:141] libmachine: (ha-431000) DBG | Found 51 entries in /var/db/dhcpd_leases!
	I0429 12:25:13.417747   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:13.417765   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:13.417774   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:13.417791   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:13.417803   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:13.417810   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:13.417818   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:13.417823   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:13.417830   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:13.417836   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:13.417843   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:13.417849   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:13.417861   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:13.417870   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:13.417883   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:13.417895   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:13.417902   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:13.417909   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:13.417922   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:13.417935   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:13.417949   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:13.417955   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:13.417969   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:13.417980   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:13.417992   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:13.418008   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:13.418016   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:13.418023   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:13.418031   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:13.418038   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:13.418044   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:13.418052   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:13.418059   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:13.418065   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:13.418083   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:13.418093   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:13.418101   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:13.418108   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:13.418116   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:13.418123   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:13.418130   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:13.418137   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:13.418144   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:13.418151   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:13.418163   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:13.418181   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:13.418190   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:13.418198   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:13.418205   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:13.418212   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:13.418220   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:15.211585   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:15 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0429 12:25:15.211693   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:15 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0429 12:25:15.211704   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:15 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0429 12:25:15.236755   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:15 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0429 12:25:15.420156   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 3
	I0429 12:25:15.420179   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:15.420320   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:15.421755   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:15.421973   17722 main.go:141] libmachine: (ha-431000) DBG | Found 51 entries in /var/db/dhcpd_leases!
	I0429 12:25:15.421993   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:15.422007   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:15.422103   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:15.422117   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:15.422133   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:15.422156   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:15.422173   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:15.422185   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:15.422194   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:15.422211   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:15.422225   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:15.422235   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:15.422244   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:15.422263   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:15.422280   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:15.422297   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:15.422313   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:15.422336   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:15.422353   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:15.422369   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:15.422380   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:15.422393   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:15.422404   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:15.422412   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:15.422424   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:15.422432   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:15.422442   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:15.422456   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:15.422472   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:15.422492   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:15.422511   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:15.422535   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:15.422552   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:15.422563   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:15.422571   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:15.422581   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:15.422599   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:15.422609   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:15.422620   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:15.422628   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:15.422639   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:15.422670   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:15.422682   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:15.422700   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:15.422711   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:15.422721   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:15.422732   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:15.422754   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:15.422771   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:15.422797   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:15.422818   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:17.422424   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 4
	I0429 12:25:17.422440   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:17.422505   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:17.423336   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:17.423416   17722 main.go:141] libmachine: (ha-431000) DBG | Found 51 entries in /var/db/dhcpd_leases!
	I0429 12:25:17.423428   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:17.423437   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:17.423446   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:17.423453   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:17.423470   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:17.423477   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:17.423484   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:17.423492   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:17.423498   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:17.423504   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:17.423514   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:17.423519   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:17.423532   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:17.423542   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:17.423561   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:17.423570   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:17.423577   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:17.423593   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:17.423600   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:17.423608   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:17.423615   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:17.423621   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:17.423627   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:17.423634   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:17.423639   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:17.423647   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:17.423653   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:17.423666   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:17.423674   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:17.423681   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:17.423696   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:17.423704   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:17.423711   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:17.423718   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:17.423726   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:17.423733   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:17.423741   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:17.423750   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:17.423758   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:17.423764   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:17.423770   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:17.423780   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:17.423792   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:17.423800   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:17.423808   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:17.423815   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:17.423821   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:17.423838   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:17.423848   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:17.423855   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:17.423864   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:19.424012   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 5
	I0429 12:25:19.424040   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:19.424118   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:19.424931   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:19.425019   17722 main.go:141] libmachine: (ha-431000) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:25:19.425031   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:25:19.425039   17722 main.go:141] libmachine: (ha-431000) DBG | Found match: 12:79:5a:6e:b2:5a
	I0429 12:25:19.425043   17722 main.go:141] libmachine: (ha-431000) DBG | IP: 192.169.0.53
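The retry loop above re-reads /var/db/dhcpd_leases on each attempt and looks for the VM's MAC address. A minimal Go sketch of that matching step, assuming the lease file has already been parsed into entries shaped like the log output (the type and function names are illustrative, not minikube's actual code):

package main

import (
	"fmt"
	"strings"
)

// dhcpEntry mirrors the fields printed for each lease record in the log above.
type dhcpEntry struct {
	Name      string
	IPAddress string
	HWAddress string
}

// findIPByMAC scans parsed lease entries for the VM's MAC address and returns
// the leased IP, corresponding to the "Found match" / "IP:" lines above.
func findIPByMAC(entries []dhcpEntry, mac string) (string, bool) {
	for _, e := range entries {
		if strings.EqualFold(e.HWAddress, mac) {
			return e.IPAddress, true
		}
	}
	return "", false
}

func main() {
	leases := []dhcpEntry{
		{Name: "minikube", IPAddress: "192.169.0.53", HWAddress: "12:79:5a:6e:b2:5a"},
	}
	if ip, ok := findIPByMAC(leases, "12:79:5a:6e:b2:5a"); ok {
		fmt.Println("IP:", ip) // prints "IP: 192.169.0.53"
	}
}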
	I0429 12:25:19.425100   17722 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0429 12:25:19.425750   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:19.425854   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:19.425968   17722 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 12:25:19.425977   17722 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0429 12:25:19.426076   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:19.426132   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:19.426957   17722 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 12:25:19.426968   17722 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 12:25:19.426974   17722 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 12:25:19.426979   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:19.427082   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:19.427181   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:19.427279   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:19.427395   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:19.427528   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:19.427739   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:19.427747   17722 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 12:25:20.480764   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:25:20.480777   17722 main.go:141] libmachine: Detecting the provisioner...
	I0429 12:25:20.480782   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:20.480910   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:20.481005   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.481084   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.481169   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:20.481302   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:20.481440   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:20.481448   17722 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 12:25:20.531217   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 12:25:20.531271   17722 main.go:141] libmachine: found compatible host: buildroot
	I0429 12:25:20.531277   17722 main.go:141] libmachine: Provisioning with buildroot...
	I0429 12:25:20.531282   17722 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0429 12:25:20.531410   17722 buildroot.go:166] provisioning hostname "ha-431000"
	I0429 12:25:20.531421   17722 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0429 12:25:20.531516   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:20.531611   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:20.531708   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.531810   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.531912   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:20.532028   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:20.532159   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:20.532169   17722 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000 && echo "ha-431000" | sudo tee /etc/hostname
	I0429 12:25:20.591606   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000
	
	I0429 12:25:20.591626   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:20.591764   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:20.591878   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.591967   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.592068   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:20.592207   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:20.592350   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:20.592361   17722 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 12:25:20.648101   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:25:20.648121   17722 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18774-15855/.minikube CaCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18774-15855/.minikube}
	I0429 12:25:20.648135   17722 buildroot.go:174] setting up certificates
	I0429 12:25:20.648144   17722 provision.go:84] configureAuth start
	I0429 12:25:20.648150   17722 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0429 12:25:20.648287   17722 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0429 12:25:20.648383   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:20.648467   17722 provision.go:143] copyHostCerts
	I0429 12:25:20.648496   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem
	I0429 12:25:20.648561   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem, removing ...
	I0429 12:25:20.648570   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem
	I0429 12:25:20.648715   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem (1082 bytes)
	I0429 12:25:20.648916   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem
	I0429 12:25:20.648956   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem, removing ...
	I0429 12:25:20.648961   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem
	I0429 12:25:20.649049   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem (1123 bytes)
	I0429 12:25:20.649197   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem
	I0429 12:25:20.649236   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem, removing ...
	I0429 12:25:20.649241   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem
	I0429 12:25:20.649330   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem (1675 bytes)
	I0429 12:25:20.649473   17722 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca-key.pem org=jenkins.ha-431000 san=[127.0.0.1 192.169.0.53 ha-431000 localhost minikube]
	I0429 12:25:20.983440   17722 provision.go:177] copyRemoteCerts
	I0429 12:25:20.983549   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 12:25:20.983566   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:20.983728   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:20.983833   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.983973   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:20.984120   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:21.014934   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 12:25:21.015004   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 12:25:21.034390   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 12:25:21.034453   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0429 12:25:21.055272   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 12:25:21.055342   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 12:25:21.074968   17722 provision.go:87] duration metric: took 426.81697ms to configureAuth
	I0429 12:25:21.074981   17722 buildroot.go:189] setting minikube options for container-runtime
	I0429 12:25:21.075129   17722 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:25:21.075141   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:21.075289   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:21.075381   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:21.075460   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.075546   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.075623   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:21.075741   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:21.075864   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:21.075872   17722 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 12:25:21.124122   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 12:25:21.124135   17722 buildroot.go:70] root file system type: tmpfs
	I0429 12:25:21.124211   17722 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 12:25:21.124224   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:21.124366   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:21.124464   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.124564   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.124654   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:21.124800   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:21.124944   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:21.124987   17722 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 12:25:21.192987   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 12:25:21.193010   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:21.193155   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:21.193259   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.193352   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.193439   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:21.193576   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:21.193725   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:21.193737   17722 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 12:25:22.738902   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 12:25:22.738918   17722 main.go:141] libmachine: Checking connection to Docker...
	I0429 12:25:22.738926   17722 main.go:141] libmachine: (ha-431000) Calling .GetURL
	I0429 12:25:22.739069   17722 main.go:141] libmachine: Docker is up and running!
	I0429 12:25:22.739077   17722 main.go:141] libmachine: Reticulating splines...
	I0429 12:25:22.739081   17722 client.go:171] duration metric: took 14.211068638s to LocalClient.Create
	I0429 12:25:22.739093   17722 start.go:167] duration metric: took 14.211111126s to libmachine.API.Create "ha-431000"
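The docker.service handling a few lines above writes the generated unit to docker.service.new and only swaps it into place, followed by daemon-reload, enable, and restart, when it differs from the installed unit. A sketch of assembling that same compare-then-swap one-liner (the helper name is an assumption for illustration, not minikube's provisioner code):

package main

import "fmt"

// buildDockerUnitSwapCmd assembles the compare-then-swap command seen in the
// log: diff the freshly generated unit against the installed one and, only if
// they differ (or the old file is missing), move the new file into place and
// daemon-reload/enable/restart docker.
func buildDockerUnitSwapCmd(unitPath string) string {
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }",
		unitPath)
}

func main() {
	fmt.Println(buildDockerUnitSwapCmd("/lib/systemd/system/docker.service"))
}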
	I0429 12:25:22.739099   17722 start.go:293] postStartSetup for "ha-431000" (driver="hyperkit")
	I0429 12:25:22.739106   17722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 12:25:22.739115   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:22.739266   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 12:25:22.739278   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:22.739375   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:22.739470   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:22.739572   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:22.739667   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:22.770970   17722 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 12:25:22.774135   17722 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 12:25:22.774148   17722 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18774-15855/.minikube/addons for local assets ...
	I0429 12:25:22.774248   17722 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18774-15855/.minikube/files for local assets ...
	I0429 12:25:22.774464   17722 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem -> 163012.pem in /etc/ssl/certs
	I0429 12:25:22.774471   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem -> /etc/ssl/certs/163012.pem
	I0429 12:25:22.774683   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 12:25:22.781634   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem --> /etc/ssl/certs/163012.pem (1708 bytes)
	I0429 12:25:22.801443   17722 start.go:296] duration metric: took 62.337848ms for postStartSetup
	I0429 12:25:22.801469   17722 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0429 12:25:22.802046   17722 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0429 12:25:22.802212   17722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json ...
	I0429 12:25:22.802533   17722 start.go:128] duration metric: took 14.328253816s to createHost
	I0429 12:25:22.802553   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:22.802653   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:22.802747   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:22.802827   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:22.802911   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:22.803023   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:22.803149   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:22.803156   17722 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 12:25:22.853273   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714418723.058219257
	
	I0429 12:25:22.853285   17722 fix.go:216] guest clock: 1714418723.058219257
	I0429 12:25:22.853294   17722 fix.go:229] Guest: 2024-04-29 12:25:23.058219257 -0700 PDT Remote: 2024-04-29 12:25:22.80254 -0700 PDT m=+14.919884293 (delta=255.679257ms)
	I0429 12:25:22.853306   17722 fix.go:200] guest clock delta is within tolerance: 255.679257ms
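The guest-clock check above runs a date +%s.%N-style command on the VM and compares the result with the host clock; the start proceeds when the absolute delta is within tolerance. A rough Go sketch of that comparison, with the tolerance value assumed purely for illustration:

package main

import (
	"fmt"
	"time"
)

// clockDelta converts the remote date output (seconds since the epoch, with a
// fractional part) into a time.Time and returns the absolute difference from
// the host clock.
func clockDelta(guestSeconds float64, host time.Time) time.Duration {
	guest := time.Unix(0, int64(guestSeconds*float64(time.Second)))
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d
}

func main() {
	host := time.Now()
	// Simulate a guest clock roughly 255ms ahead, as in the log above.
	guest := float64(host.UnixNano())/float64(time.Second) + 0.255
	d := clockDelta(guest, host)
	fmt.Printf("delta=%v within tolerance: %v\n", d, d <= 2*time.Second) // tolerance value is an assumption
}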
	I0429 12:25:22.853309   17722 start.go:83] releasing machines lock for "ha-431000", held for 14.379182495s
	I0429 12:25:22.853329   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:22.853469   17722 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0429 12:25:22.853555   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:22.853914   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:22.854029   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:22.854103   17722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 12:25:22.854130   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:22.854195   17722 ssh_runner.go:195] Run: cat /version.json
	I0429 12:25:22.854205   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:22.854229   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:22.854325   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:22.854339   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:22.854430   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:22.854457   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:22.854550   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:22.854559   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:22.854669   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:22.880227   17722 ssh_runner.go:195] Run: systemctl --version
	I0429 12:25:22.884904   17722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 12:25:23.083357   17722 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 12:25:23.083479   17722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 12:25:23.099007   17722 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 12:25:23.099018   17722 start.go:494] detecting cgroup driver to use...
	I0429 12:25:23.099124   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:25:23.114251   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 12:25:23.124072   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 12:25:23.133303   17722 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 12:25:23.133353   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 12:25:23.142358   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:25:23.151274   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 12:25:23.160219   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:25:23.169150   17722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 12:25:23.178221   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 12:25:23.187855   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 12:25:23.200187   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 12:25:23.215042   17722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 12:25:23.229984   17722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 12:25:23.239505   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:23.358187   17722 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 12:25:23.377452   17722 start.go:494] detecting cgroup driver to use...
	I0429 12:25:23.377536   17722 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 12:25:23.390115   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:25:23.402150   17722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 12:25:23.418123   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:25:23.430203   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:25:23.441283   17722 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 12:25:23.508378   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:25:23.519755   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:25:23.536893   17722 ssh_runner.go:195] Run: which cri-dockerd
	I0429 12:25:23.539939   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 12:25:23.548090   17722 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 12:25:23.561479   17722 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 12:25:23.659092   17722 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 12:25:23.770465   17722 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 12:25:23.770541   17722 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 12:25:23.785669   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:23.879272   17722 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 12:25:26.127907   17722 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.248642004s)
	I0429 12:25:26.127971   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 12:25:26.139541   17722 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0429 12:25:26.153363   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 12:25:26.163941   17722 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 12:25:26.260524   17722 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 12:25:26.359754   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:26.473092   17722 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 12:25:26.486634   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 12:25:26.497861   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:26.593975   17722 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 12:25:26.650473   17722 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 12:25:26.650551   17722 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 12:25:26.655442   17722 start.go:562] Will wait 60s for crictl version
	I0429 12:25:26.655497   17722 ssh_runner.go:195] Run: which crictl
	I0429 12:25:26.658642   17722 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 12:25:26.687124   17722 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 12:25:26.687194   17722 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 12:25:26.703960   17722 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 12:25:26.743708   17722 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 12:25:26.743769   17722 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0429 12:25:26.744198   17722 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0429 12:25:26.748889   17722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 12:25:26.759848   17722 kubeadm.go:877] updating cluster {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 12:25:26.759910   17722 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 12:25:26.759963   17722 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 12:25:26.778002   17722 docker.go:685] Got preloaded images: 
	I0429 12:25:26.778015   17722 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0429 12:25:26.778063   17722 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 12:25:26.792529   17722 ssh_runner.go:195] Run: which lz4
	I0429 12:25:26.795405   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0429 12:25:26.795520   17722 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0429 12:25:26.798617   17722 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 12:25:26.798635   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0429 12:25:27.765888   17722 docker.go:649] duration metric: took 970.420369ms to copy over tarball
	I0429 12:25:27.765960   17722 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 12:25:30.567839   17722 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.801895501s)
	I0429 12:25:30.567854   17722 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 12:25:30.595797   17722 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 12:25:30.604930   17722 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0429 12:25:30.618739   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:30.724313   17722 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 12:25:32.998664   17722 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.274358101s)
	I0429 12:25:32.998761   17722 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 12:25:33.010141   17722 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 12:25:33.010164   17722 cache_images.go:84] Images are preloaded, skipping loading
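Earlier in this run the docker images listing came back empty, so the preload tarball was copied and extracted before re-checking; the list above then satisfies the required images and loading is skipped. A small sketch of that presence check, assuming the docker images --format {{.Repository}}:{{.Tag}} output has already been split into lines (function and variable names are illustrative):

package main

import "fmt"

// imagesPreloaded reports whether every required image appears in the list
// returned by the docker images query, mirroring the "wasn't preloaded" /
// "Images are preloaded, skipping loading" decision seen in the log.
func imagesPreloaded(got, required []string) bool {
	have := make(map[string]bool, len(got))
	for _, img := range got {
		have[img] = true
	}
	for _, img := range required {
		if !have[img] {
			return false
		}
	}
	return true
}

func main() {
	got := []string{
		"registry.k8s.io/kube-apiserver:v1.30.0",
		"registry.k8s.io/etcd:3.5.12-0",
	}
	fmt.Println(imagesPreloaded(got, []string{"registry.k8s.io/kube-apiserver:v1.30.0"})) // true
}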
	I0429 12:25:33.010175   17722 kubeadm.go:928] updating node { 192.169.0.53 8443 v1.30.0 docker true true} ...
	I0429 12:25:33.010258   17722 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 12:25:33.010327   17722 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 12:25:33.028263   17722 cni.go:84] Creating CNI manager for ""
	I0429 12:25:33.028276   17722 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 12:25:33.028288   17722 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 12:25:33.028307   17722 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.53 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-431000 NodeName:ha-431000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 12:25:33.028401   17722 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-431000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0429 12:25:33.028424   17722 kube-vip.go:115] generating kube-vip config ...
	I0429 12:25:33.028473   17722 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 12:25:33.041730   17722 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0429 12:25:33.041824   17722 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
	I0429 12:25:33.041883   17722 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 12:25:33.049658   17722 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 12:25:33.049723   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0429 12:25:33.057238   17722 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0429 12:25:33.070533   17722 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 12:25:33.084130   17722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0429 12:25:33.098615   17722 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0429 12:25:33.112003   17722 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0429 12:25:33.115112   17722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 12:25:33.124648   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:33.220871   17722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:25:33.235322   17722 certs.go:68] Setting up /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000 for IP: 192.169.0.53
	I0429 12:25:33.235337   17722 certs.go:194] generating shared ca certs ...
	I0429 12:25:33.235350   17722 certs.go:226] acquiring lock for ca certs: {Name:mk18b421bd78dac9f18e716e2a7f75e3dd2eab02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.235540   17722 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.key
	I0429 12:25:33.235613   17722 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/proxy-client-ca.key
	I0429 12:25:33.235624   17722 certs.go:256] generating profile certs ...
	I0429 12:25:33.235671   17722 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.key
	I0429 12:25:33.235685   17722 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.crt with IP's: []
	I0429 12:25:33.339009   17722 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.crt ...
	I0429 12:25:33.339025   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.crt: {Name:mkb1110c9ecfb391fd480dbb43740124cada1f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.339372   17722 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.key ...
	I0429 12:25:33.339380   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.key: {Name:mkd77fb4d6c3d30f48c31a52650895ec059a5da2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.339601   17722 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key.da16b1a9
	I0429 12:25:33.339619   17722 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt.da16b1a9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.53 192.169.0.254]
	I0429 12:25:33.423656   17722 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt.da16b1a9 ...
	I0429 12:25:33.423669   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt.da16b1a9: {Name:mk814e943097b224c38c531359e60f0e011c83a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.454432   17722 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key.da16b1a9 ...
	I0429 12:25:33.454489   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key.da16b1a9: {Name:mkf400b8f0ffe83cfed5eaf2cbd6dc5c6a80d173 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.455578   17722 certs.go:381] copying /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt.da16b1a9 -> /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt
	I0429 12:25:33.455838   17722 certs.go:385] copying /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key.da16b1a9 -> /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key
	I0429 12:25:33.456077   17722 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.key
	I0429 12:25:33.456094   17722 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.crt with IP's: []
	I0429 12:25:33.747139   17722 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.crt ...
	I0429 12:25:33.747160   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.crt: {Name:mk16d91ff8fca22baa28a0d26524b0eab36376d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.747480   17722 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.key ...
	I0429 12:25:33.747491   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.key: {Name:mk54d72d806a4c855d8c1362244d32326e1573f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
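At this point the profile's three certificate pairs exist on the host: the minikube-user client cert, the apiserver serving cert (signed for 10.96.0.1, 127.0.0.1, 10.0.0.1, 192.169.0.53 and the VIP 192.169.0.254), and the aggregator proxy-client cert. A quick local sanity check of the apiserver cert's SANs, assuming openssl is available and using the profile path from the log:

	# The SAN list should include the node IP 192.169.0.53 and the kube-vip VIP 192.169.0.254
	openssl x509 -noout -text \
	  -in /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt \
	  | grep -A1 'Subject Alternative Name'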
	I0429 12:25:33.747702   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 12:25:33.747734   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 12:25:33.747753   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 12:25:33.747772   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 12:25:33.747790   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 12:25:33.747808   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 12:25:33.747826   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 12:25:33.747844   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 12:25:33.747942   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/16301.pem (1338 bytes)
	W0429 12:25:33.747991   17722 certs.go:480] ignoring /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/16301_empty.pem, impossibly tiny 0 bytes
	I0429 12:25:33.747999   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 12:25:33.748029   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem (1082 bytes)
	I0429 12:25:33.748058   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem (1123 bytes)
	I0429 12:25:33.748086   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem (1675 bytes)
	I0429 12:25:33.748159   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem (1708 bytes)
	I0429 12:25:33.748192   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:25:33.748211   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/16301.pem -> /usr/share/ca-certificates/16301.pem
	I0429 12:25:33.748229   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem -> /usr/share/ca-certificates/163012.pem
	I0429 12:25:33.748622   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 12:25:33.769199   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 12:25:33.789288   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 12:25:33.809631   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 12:25:33.829028   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 12:25:33.848280   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 12:25:33.867787   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 12:25:33.886985   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 12:25:33.908803   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 12:25:33.936146   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/16301.pem --> /usr/share/ca-certificates/16301.pem (1338 bytes)
	I0429 12:25:33.955589   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem --> /usr/share/ca-certificates/163012.pem (1708 bytes)
	I0429 12:25:33.976759   17722 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 12:25:33.990981   17722 ssh_runner.go:195] Run: openssl version
	I0429 12:25:33.995277   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163012.pem && ln -fs /usr/share/ca-certificates/163012.pem /etc/ssl/certs/163012.pem"
	I0429 12:25:34.003661   17722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163012.pem
	I0429 12:25:34.007097   17722 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 19:19 /usr/share/ca-certificates/163012.pem
	I0429 12:25:34.007137   17722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163012.pem
	I0429 12:25:34.011417   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163012.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 12:25:34.019724   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 12:25:34.027921   17722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:25:34.031378   17722 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 19:11 /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:25:34.031414   17722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:25:34.035640   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 12:25:34.044197   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16301.pem && ln -fs /usr/share/ca-certificates/16301.pem /etc/ssl/certs/16301.pem"
	I0429 12:25:34.052743   17722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16301.pem
	I0429 12:25:34.056299   17722 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 19:19 /usr/share/ca-certificates/16301.pem
	I0429 12:25:34.056347   17722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16301.pem
	I0429 12:25:34.060683   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16301.pem /etc/ssl/certs/51391683.0"
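The three blocks above install the CA material the guest needs for TLS: each PEM is copied under /usr/share/ca-certificates, linked into /etc/ssl/certs, and then linked again under its OpenSSL subject hash (b5213941.0 for minikubeCA.pem, 3ec20f2e.0 for 163012.pem, 51391683.0 for 16301.pem) so that clients scanning the hash directory can find it. A condensed sketch of the hashing step, assuming the files are already in place:

	# Create the <subject-hash>.0 symlinks that OpenSSL-based clients look up
	for pem in minikubeCA.pem 163012.pem 16301.pem; do
	  hash=$(openssl x509 -hash -noout -in "/usr/share/ca-certificates/$pem")
	  sudo ln -fs "/etc/ssl/certs/$pem" "/etc/ssl/certs/$hash.0"
	done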
	I0429 12:25:34.069054   17722 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 12:25:34.072258   17722 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 12:25:34.072301   17722 kubeadm.go:391] StartCluster: {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 C
lusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:25:34.072393   17722 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 12:25:34.083740   17722 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 12:25:34.091194   17722 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 12:25:34.098727   17722 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 12:25:34.106172   17722 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 12:25:34.106181   17722 kubeadm.go:156] found existing configuration files:
	
	I0429 12:25:34.106220   17722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 12:25:34.113138   17722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 12:25:34.113180   17722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 12:25:34.120473   17722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 12:25:34.127438   17722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 12:25:34.127477   17722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 12:25:34.134756   17722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 12:25:34.141743   17722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 12:25:34.141780   17722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 12:25:34.149046   17722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 12:25:34.156264   17722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 12:25:34.156318   17722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 12:25:34.163835   17722 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
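The Start line above is the actual bootstrap: kubeadm init is run against the generated /var/tmp/minikube/kubeadm.yaml with preflight checks suppressed for conditions minikube has already prepared (existing manifest and data directories, swap, CPU, memory and port 10250). The same invocation reflowed for readability, with the preflight list kept on one line so the argument is not split:

	sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" \
	  kubeadm init \
	  --config /var/tmp/minikube/kubeadm.yaml \
	  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem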
	I0429 12:25:34.237910   17722 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 12:25:34.237955   17722 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 12:25:34.323023   17722 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 12:25:34.323108   17722 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 12:25:34.323194   17722 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 12:25:34.504766   17722 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 12:25:34.536903   17722 out.go:204]   - Generating certificates and keys ...
	I0429 12:25:34.536959   17722 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 12:25:34.537016   17722 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 12:25:34.879309   17722 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 12:25:34.943072   17722 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 12:25:35.013417   17722 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 12:25:35.182918   17722 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 12:25:35.261483   17722 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 12:25:35.261590   17722 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.53 127.0.0.1 ::1]
	I0429 12:25:35.419895   17722 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 12:25:35.420010   17722 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.53 127.0.0.1 ::1]
	I0429 12:25:35.479554   17722 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 12:25:35.607018   17722 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 12:25:35.903671   17722 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 12:25:35.904031   17722 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 12:25:36.142024   17722 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 12:25:36.288455   17722 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 12:25:36.556621   17722 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 12:25:36.634522   17722 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 12:25:36.899304   17722 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 12:25:36.899717   17722 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 12:25:36.902090   17722 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 12:25:36.923478   17722 out.go:204]   - Booting up control plane ...
	I0429 12:25:36.923546   17722 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 12:25:36.923600   17722 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 12:25:36.923664   17722 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 12:25:36.923740   17722 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 12:25:36.923808   17722 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 12:25:36.923851   17722 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 12:25:37.025954   17722 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 12:25:37.026032   17722 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 12:25:38.035770   17722 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.010128476s
	I0429 12:25:38.035905   17722 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 12:25:43.764081   17722 kubeadm.go:309] [api-check] The API server is healthy after 5.731795501s
	I0429 12:25:43.772560   17722 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 12:25:43.789869   17722 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 12:25:43.805688   17722 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 12:25:43.805836   17722 kubeadm.go:309] [mark-control-plane] Marking the node ha-431000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 12:25:43.811946   17722 kubeadm.go:309] [bootstrap-token] Using token: m6uvir.d4h53m26b1o2593q
	I0429 12:25:43.849413   17722 out.go:204]   - Configuring RBAC rules ...
	I0429 12:25:43.849574   17722 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 12:25:43.852729   17722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 12:25:43.892587   17722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 12:25:43.894879   17722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 12:25:43.896753   17722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 12:25:43.898705   17722 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 12:25:44.168425   17722 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 12:25:44.581231   17722 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 12:25:45.167548   17722 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 12:25:45.168144   17722 kubeadm.go:309] 
	I0429 12:25:45.168196   17722 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 12:25:45.168206   17722 kubeadm.go:309] 
	I0429 12:25:45.168263   17722 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 12:25:45.168268   17722 kubeadm.go:309] 
	I0429 12:25:45.168291   17722 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 12:25:45.168346   17722 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 12:25:45.168395   17722 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 12:25:45.168404   17722 kubeadm.go:309] 
	I0429 12:25:45.168448   17722 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 12:25:45.168454   17722 kubeadm.go:309] 
	I0429 12:25:45.168486   17722 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 12:25:45.168492   17722 kubeadm.go:309] 
	I0429 12:25:45.168531   17722 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 12:25:45.168592   17722 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 12:25:45.168639   17722 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 12:25:45.168643   17722 kubeadm.go:309] 
	I0429 12:25:45.168723   17722 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 12:25:45.168792   17722 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 12:25:45.168798   17722 kubeadm.go:309] 
	I0429 12:25:45.168870   17722 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token m6uvir.d4h53m26b1o2593q \
	I0429 12:25:45.168957   17722 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:536cc8a1d30a1fe87d4b4651015a78bced2496ff2cc6e72fc88d924b8d819ff9 \
	I0429 12:25:45.168980   17722 kubeadm.go:309] 	--control-plane 
	I0429 12:25:45.168984   17722 kubeadm.go:309] 
	I0429 12:25:45.169055   17722 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 12:25:45.169064   17722 kubeadm.go:309] 
	I0429 12:25:45.169140   17722 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token m6uvir.d4h53m26b1o2593q \
	I0429 12:25:45.169225   17722 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:536cc8a1d30a1fe87d4b4651015a78bced2496ff2cc6e72fc88d924b8d819ff9 
	I0429 12:25:45.169327   17722 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0429 12:25:45.169339   17722 cni.go:84] Creating CNI manager for ""
	I0429 12:25:45.169343   17722 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 12:25:45.192418   17722 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 12:25:45.250402   17722 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 12:25:45.254278   17722 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 12:25:45.254288   17722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 12:25:45.273385   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 12:25:45.493620   17722 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 12:25:45.493687   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:45.493694   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-431000 minikube.k8s.io/updated_at=2024_04_29T12_25_45_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=ha-431000 minikube.k8s.io/primary=true
	I0429 12:25:45.612967   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:45.644835   17722 ops.go:34] apiserver oom_adj: -16
	I0429 12:25:46.113178   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:46.613113   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:47.114140   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:47.613848   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:48.113291   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:48.613593   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:49.113154   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:49.613813   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:50.113663   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:50.613520   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:51.113510   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:51.613048   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:52.113848   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:52.614240   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:53.113945   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:53.615040   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:54.113136   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:54.614312   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:55.113432   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:55.614732   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:56.113281   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:56.613014   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:57.114177   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:57.613453   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:57.726340   17722 kubeadm.go:1107] duration metric: took 12.232843816s to wait for elevateKubeSystemPrivileges
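The burst of "kubectl get sa default" calls from 12:25:45 to 12:25:57 is minikube polling, roughly twice per second, until the default service account exists (it is created by the controller-manager's service account controller), while the minikube-rbac clusterrolebinding created at 12:25:45 grants cluster-admin to kube-system's default service account. A minimal sketch of that bind-then-wait sequence, using the binary and kubeconfig paths from the log:

	KUBECTL=/var/lib/minikube/binaries/v1.30.0/kubectl
	KUBECONFIG_PATH=/var/lib/minikube/kubeconfig
	# Grant cluster-admin to kube-system's default service account, as in the 12:25:45 call above
	sudo "$KUBECTL" --kubeconfig="$KUBECONFIG_PATH" create clusterrolebinding minikube-rbac \
	  --clusterrole=cluster-admin --serviceaccount=kube-system:default
	# Poll until the default service account is visible, roughly every 500ms
	until sudo "$KUBECTL" --kubeconfig="$KUBECONFIG_PATH" get sa default >/dev/null 2>&1; do
	  sleep 0.5
	done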
	W0429 12:25:57.726368   17722 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 12:25:57.726373   17722 kubeadm.go:393] duration metric: took 23.654349307s to StartCluster
	I0429 12:25:57.726388   17722 settings.go:142] acquiring lock: {Name:mk4619a2e41059be70c5979af83844e654a88068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:57.726480   17722 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18774-15855/kubeconfig
	I0429 12:25:57.726968   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/kubeconfig: {Name:mkea524d41d79b030c40cdcb50fe58e264b35b6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:57.727247   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 12:25:57.727263   17722 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 12:25:57.727278   17722 start.go:240] waiting for startup goroutines ...
	I0429 12:25:57.727289   17722 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 12:25:57.727325   17722 addons.go:69] Setting storage-provisioner=true in profile "ha-431000"
	I0429 12:25:57.727331   17722 addons.go:69] Setting default-storageclass=true in profile "ha-431000"
	I0429 12:25:57.727351   17722 addons.go:234] Setting addon storage-provisioner=true in "ha-431000"
	I0429 12:25:57.727356   17722 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-431000"
	I0429 12:25:57.727369   17722 host.go:66] Checking if "ha-431000" exists ...
	I0429 12:25:57.727424   17722 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:25:57.727601   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:57.727602   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:57.727616   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:57.727617   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:57.736886   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61984
	I0429 12:25:57.737295   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:57.737678   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:57.737690   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:57.737981   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61986
	I0429 12:25:57.738074   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:57.738331   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:57.738533   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:57.738579   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:57.738658   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:57.738671   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:57.739264   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:57.739478   17722 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0429 12:25:57.739646   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:57.739702   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:57.741554   17722 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/18774-15855/kubeconfig
	I0429 12:25:57.741775   17722 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/18774-15855/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(ni
l)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xdd3cee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 12:25:57.742264   17722 cert_rotation.go:137] Starting client certificate rotation controller
	I0429 12:25:57.742421   17722 addons.go:234] Setting addon default-storageclass=true in "ha-431000"
	I0429 12:25:57.742443   17722 host.go:66] Checking if "ha-431000" exists ...
	I0429 12:25:57.742660   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:57.742677   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:57.747696   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61988
	I0429 12:25:57.748147   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:57.748482   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:57.748498   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:57.748745   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:57.748874   17722 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0429 12:25:57.748975   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:57.749041   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:57.750035   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:57.788044   17722 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 12:25:57.751748   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61990
	I0429 12:25:57.825563   17722 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 12:25:57.825579   17722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 12:25:57.825595   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:57.825776   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:57.825980   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:57.826026   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:57.826183   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:57.826334   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:57.826500   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:57.826516   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:57.826839   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:57.827237   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:57.827262   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:57.835538   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 12:25:57.836072   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61993
	I0429 12:25:57.836372   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:57.836712   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:57.836731   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:57.836941   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:57.837051   17722 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0429 12:25:57.837149   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:57.837235   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:57.838190   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:57.838373   17722 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 12:25:57.838382   17722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 12:25:57.838392   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:57.838527   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:57.838615   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:57.838701   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:57.838792   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:57.921992   17722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 12:25:57.963950   17722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 12:25:58.239892   17722 start.go:946] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
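The sed pipeline at 12:25:57.835 above patches the coredns ConfigMap so that host.minikube.internal resolves to the host gateway 192.169.0.1 from inside the cluster: a hosts { ... fallthrough } stanza is spliced in ahead of the forward plugin, plus a log directive ahead of errors. One way to confirm the injected record, using the same kubectl binary and kubeconfig as the log:

	# The Corefile in the coredns ConfigMap should now contain the injected hosts block
	sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig \
	  -n kube-system get configmap coredns -o jsonpath='{.data.Corefile}' | grep -A3 'hosts {'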
	I0429 12:25:58.350022   17722 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:58.350036   17722 main.go:141] libmachine: (ha-431000) Calling .Close
	I0429 12:25:58.350091   17722 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:58.350105   17722 main.go:141] libmachine: (ha-431000) Calling .Close
	I0429 12:25:58.350223   17722 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0429 12:25:58.350224   17722 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:58.350239   17722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:58.350250   17722 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:58.350258   17722 main.go:141] libmachine: (ha-431000) Calling .Close
	I0429 12:25:58.350278   17722 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:58.350288   17722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:58.350302   17722 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:58.350302   17722 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0429 12:25:58.350309   17722 main.go:141] libmachine: (ha-431000) Calling .Close
	I0429 12:25:58.350428   17722 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:58.350439   17722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:58.350445   17722 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0429 12:25:58.350487   17722 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:58.350497   17722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:58.350532   17722 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0429 12:25:58.350553   17722 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0429 12:25:58.350558   17722 round_trippers.go:469] Request Headers:
	I0429 12:25:58.350566   17722 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:25:58.350571   17722 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0429 12:25:58.356148   17722 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:25:58.356671   17722 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0429 12:25:58.356684   17722 round_trippers.go:469] Request Headers:
	I0429 12:25:58.356690   17722 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:25:58.356695   17722 round_trippers.go:473]     Content-Type: application/json
	I0429 12:25:58.356699   17722 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0429 12:25:58.359632   17722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:25:58.359767   17722 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:58.359777   17722 main.go:141] libmachine: (ha-431000) Calling .Close
	I0429 12:25:58.359915   17722 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0429 12:25:58.359935   17722 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:58.359942   17722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:58.383304   17722 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0429 12:25:58.425404   17722 addons.go:505] duration metric: took 698.121896ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0429 12:25:58.425460   17722 start.go:245] waiting for cluster config update ...
	I0429 12:25:58.425487   17722 start.go:254] writing updated cluster config ...
	I0429 12:25:58.541992   17722 out.go:177] 
	I0429 12:25:58.562923   17722 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:25:58.563008   17722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json ...
	I0429 12:25:58.583885   17722 out.go:177] * Starting "ha-431000-m02" control-plane node in "ha-431000" cluster
	I0429 12:25:58.624732   17722 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 12:25:58.624767   17722 cache.go:56] Caching tarball of preloaded images
	I0429 12:25:58.624960   17722 preload.go:173] Found /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 12:25:58.624976   17722 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 12:25:58.625077   17722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json ...
	I0429 12:25:58.625802   17722 start.go:360] acquireMachinesLock for ha-431000-m02: {Name:mkd2435de3b02389c894a9c5afc7bb14c11265dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 12:25:58.625882   17722 start.go:364] duration metric: took 63.962µs to acquireMachinesLock for "ha-431000-m02"
	I0429 12:25:58.625908   17722 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisk
s:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 12:25:58.625983   17722 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0429 12:25:58.647279   17722 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 12:25:58.647410   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:58.647441   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:58.657607   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61997
	I0429 12:25:58.657937   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:58.658289   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:58.658303   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:58.658522   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:58.658625   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0429 12:25:58.658714   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:25:58.658812   17722 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0429 12:25:58.658829   17722 client.go:168] LocalClient.Create starting
	I0429 12:25:58.658858   17722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem
	I0429 12:25:58.658895   17722 main.go:141] libmachine: Decoding PEM data...
	I0429 12:25:58.658904   17722 main.go:141] libmachine: Parsing certificate...
	I0429 12:25:58.658941   17722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem
	I0429 12:25:58.658966   17722 main.go:141] libmachine: Decoding PEM data...
	I0429 12:25:58.658976   17722 main.go:141] libmachine: Parsing certificate...
	I0429 12:25:58.658987   17722 main.go:141] libmachine: Running pre-create checks...
	I0429 12:25:58.658993   17722 main.go:141] libmachine: (ha-431000-m02) Calling .PreCreateCheck
	I0429 12:25:58.659060   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:58.659082   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0429 12:25:58.684517   17722 main.go:141] libmachine: Creating machine...
	I0429 12:25:58.684548   17722 main.go:141] libmachine: (ha-431000-m02) Calling .Create
	I0429 12:25:58.684719   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:58.685030   17722 main.go:141] libmachine: (ha-431000-m02) DBG | I0429 12:25:58.684708   17751 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/18774-15855/.minikube
	I0429 12:25:58.685090   17722 main.go:141] libmachine: (ha-431000-m02) Downloading /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18774-15855/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 12:25:58.880461   17722 main.go:141] libmachine: (ha-431000-m02) DBG | I0429 12:25:58.880334   17751 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa...
	I0429 12:25:58.972148   17722 main.go:141] libmachine: (ha-431000-m02) DBG | I0429 12:25:58.972072   17751 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk...
	I0429 12:25:58.972171   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Writing magic tar header
	I0429 12:25:58.972194   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Writing SSH key tar header
	I0429 12:25:58.972426   17722 main.go:141] libmachine: (ha-431000-m02) DBG | I0429 12:25:58.972393   17751 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02 ...
	I0429 12:25:59.330149   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:59.330167   17722 main.go:141] libmachine: (ha-431000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/hyperkit.pid
	I0429 12:25:59.330184   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Using UUID 9365fc26-c2e1-4d51-b9c4-d17102d62f4c
	I0429 12:25:59.355313   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Generated MAC ca:19:33:23:af:66
	I0429 12:25:59.355328   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0429 12:25:59.355356   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9365fc26-c2e1-4d51-b9c4-d17102d62f4c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b21b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0429 12:25:59.355382   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9365fc26-c2e1-4d51-b9c4-d17102d62f4c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b21b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0429 12:25:59.355419   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9365fc26-c2e1-4d51-b9c4-d17102d62f4c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0429 12:25:59.355452   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9365fc26-c2e1-4d51-b9c4-d17102d62f4c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0429 12:25:59.355465   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0429 12:25:59.358434   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: Pid is 17752
	I0429 12:25:59.359551   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 0
	I0429 12:25:59.359562   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:59.359638   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:25:59.360574   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:25:59.360681   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:25:59.360697   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:25:59.360720   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:59.360739   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:59.360752   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:59.360762   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:59.360776   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:59.360790   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:59.360800   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:59.360809   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:59.360829   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:59.360841   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:59.360856   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:59.360868   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:59.360879   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:59.360894   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:59.360903   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:59.360932   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:59.360940   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:59.360949   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:59.360958   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:59.360966   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:59.360973   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:59.360981   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:59.360987   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:59.360995   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:59.361001   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:59.361007   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:59.361029   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:59.361043   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:59.361055   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:59.361062   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:59.361069   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:59.361077   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:59.361085   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:59.361094   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:59.361102   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:59.361111   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:59.361122   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:59.361132   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:59.361139   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:59.361146   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:59.361155   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:59.361168   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:59.361180   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:59.361200   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:59.361215   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:59.361230   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:59.361243   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:59.361254   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:59.361267   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:59.361281   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:59.361294   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:59.365988   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0429 12:25:59.374154   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0429 12:25:59.374983   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0429 12:25:59.375007   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0429 12:25:59.375049   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0429 12:25:59.375071   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0429 12:25:59.757805   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0429 12:25:59.757822   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0429 12:25:59.872427   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0429 12:25:59.872442   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0429 12:25:59.872451   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0429 12:25:59.872459   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0429 12:25:59.873348   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0429 12:25:59.873361   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0429 12:26:01.362461   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 1
	I0429 12:26:01.362478   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:01.362589   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:01.363375   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:26:01.363463   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:26:01.363472   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:26:01.363482   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:26:01.363495   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:26:01.363505   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:26:01.363523   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:26:01.363540   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:26:01.363555   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:26:01.363564   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:26:01.363570   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:26:01.363580   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:26:01.363585   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:26:01.363594   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:26:01.363601   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:26:01.363606   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:26:01.363615   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:26:01.363620   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:26:01.363627   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:26:01.363641   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:26:01.363649   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:26:01.363656   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:26:01.363664   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:26:01.363671   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:26:01.363678   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:26:01.363685   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:26:01.363691   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:26:01.363698   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:26:01.363706   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:26:01.363715   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:26:01.363729   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:26:01.363745   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:26:01.363752   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:26:01.363758   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:26:01.363763   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:26:01.363770   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:26:01.363779   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:26:01.363786   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:26:01.363791   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:26:01.363806   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:26:01.363819   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:26:01.363827   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:26:01.363833   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:26:01.363839   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:26:01.363847   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:26:01.363854   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:26:01.363862   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:26:01.363876   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:26:01.363887   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:26:01.363897   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:26:01.363909   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:26:01.363929   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:26:01.363942   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:26:01.363951   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:26:03.364674   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 2
	I0429 12:26:03.364689   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:03.364761   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:03.365538   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:26:03.365618   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:26:03.365628   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:26:03.365649   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:26:03.365656   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:26:03.365663   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:26:03.365668   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:26:03.365681   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:26:03.365689   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:26:03.365701   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:26:03.365714   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:26:03.365721   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:26:03.365734   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:26:03.365745   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:26:03.365754   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:26:03.365760   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:26:03.365766   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:26:03.365776   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:26:03.365783   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:26:03.365789   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:26:03.365795   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:26:03.365802   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:26:03.365809   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:26:03.365820   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:26:03.365828   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:26:03.365835   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:26:03.365840   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:26:03.365850   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:26:03.365857   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:26:03.365871   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:26:03.365880   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:26:03.365887   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:26:03.365894   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:26:03.365906   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:26:03.365922   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:26:03.365930   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:26:03.365939   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:26:03.365955   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:26:03.365964   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:26:03.365971   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:26:03.365979   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:26:03.365986   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:26:03.365993   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:26:03.366000   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:26:03.366008   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:26:03.366015   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:26:03.366022   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:26:03.366028   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:26:03.366035   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:26:03.366043   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:26:03.366050   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:26:03.366067   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:26:03.366079   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:26:03.366088   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:26:05.177799   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:26:05 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0429 12:26:05.177845   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:26:05 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0429 12:26:05.177853   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:26:05 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0429 12:26:05.201640   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:26:05 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0429 12:26:05.367080   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 3
	I0429 12:26:05.367108   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:05.367266   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:05.368694   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:26:05.368836   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:26:05.368855   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:26:05.368872   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:26:05.368902   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:26:05.368917   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:26:05.368932   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:26:05.368944   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:26:05.368959   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:26:05.368998   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:26:05.369028   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:26:05.369054   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:26:05.369067   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:26:05.369081   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:26:05.369089   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:26:05.369107   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:26:05.369129   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:26:05.369139   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:26:05.369150   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:26:05.369170   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:26:05.369187   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:26:05.369198   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:26:05.369209   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:26:05.369224   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:26:05.369237   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:26:05.369248   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:26:05.369260   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:26:05.369269   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:26:05.369280   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:26:05.369290   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:26:05.369300   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:26:05.369309   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:26:05.369317   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:26:05.369342   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:26:05.369360   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:26:05.369371   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:26:05.369383   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:26:05.369391   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:26:05.369402   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:26:05.369412   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:26:05.369420   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:26:05.369429   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:26:05.369437   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:26:05.369446   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:26:05.369456   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:26:05.369467   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:26:05.369476   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:26:05.369491   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:26:05.369505   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:26:05.369516   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:26:05.369528   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:26:05.369539   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:26:05.369549   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:26:05.369556   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:26:07.369177   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 4
	I0429 12:26:07.369192   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:07.369292   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:07.370069   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:26:07.370166   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:26:07.370180   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:26:07.370193   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:26:07.370204   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:26:07.370215   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:26:07.370228   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:26:07.370247   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:26:07.370259   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:26:07.370274   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:26:07.370287   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:26:07.370296   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:26:07.370304   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:26:07.370309   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:26:07.370315   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:26:07.370344   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:26:07.370354   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:26:07.370363   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:26:07.370371   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:26:07.370384   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:26:07.370397   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:26:07.370403   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:26:07.370411   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:26:07.370418   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:26:07.370423   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:26:07.370438   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:26:07.370446   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:26:07.370452   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:26:07.370467   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:26:07.370478   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:26:07.370508   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:26:07.370534   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:26:07.370550   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:26:07.370567   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:26:07.370582   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:26:07.370595   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:26:07.370607   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:26:07.370629   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:26:07.370639   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:26:07.370646   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:26:07.370659   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:26:07.370669   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:26:07.370678   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:26:07.370685   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:26:07.370695   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:26:07.370729   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:26:07.370740   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:26:07.370745   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:26:07.370768   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:26:07.370775   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:26:07.370785   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:26:07.370791   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:26:07.370807   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:26:07.370822   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:26:09.370416   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 5
	I0429 12:26:09.370434   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:09.370529   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:09.371299   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:26:09.371370   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 53 entries in /var/db/dhcpd_leases!
	I0429 12:26:09.371380   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:ca:19:33:23:af:66 ID:1,ca:19:33:23:af:66 Lease:0x663145d0}
	I0429 12:26:09.371387   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found match: ca:19:33:23:af:66
	I0429 12:26:09.371394   17722 main.go:141] libmachine: (ha-431000-m02) DBG | IP: 192.169.0.54
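The block above is the hyperkit driver's address discovery: it polls /var/db/dhcpd_leases until an entry whose HWAddress matches the MAC generated for the new VM (ca:19:33:23:af:66) appears, then takes that entry's IPAddress (192.169.0.54) as the node address. Below is a minimal Go sketch of that lookup, operating on entries in the already-parsed form printed in the log; the struct fields and the matching MAC come from the log, while the surrounding program and the other lease shown are purely illustrative.

package main

import "fmt"

// dhcpEntry mirrors the fields minikube prints for each /var/db/dhcpd_leases
// record in the log above; only the fields needed for the lookup are kept.
type dhcpEntry struct {
	Name      string
	IPAddress string
	HWAddress string
}

// ipForMAC returns the IP of the first lease whose hardware address matches
// the MAC assigned to the new VM, mimicking the "Searching for ... Found
// match" sequence in the log. It returns "" when no lease matches yet.
func ipForMAC(entries []dhcpEntry, mac string) string {
	for _, e := range entries {
		if e.HWAddress == mac {
			return e.IPAddress
		}
	}
	return ""
}

func main() {
	leases := []dhcpEntry{
		{Name: "minikube", IPAddress: "192.169.0.53", HWAddress: "de:ad:be:ef:00:01"}, // hypothetical earlier lease
		{Name: "minikube", IPAddress: "192.169.0.54", HWAddress: "ca:19:33:23:af:66"}, // entry from the log
	}
	fmt.Println(ipForMAC(leases, "ca:19:33:23:af:66")) // 192.169.0.54
}

Until the matching lease shows up, the driver simply sleeps and retries, which is what the numbered "Attempt" lines record.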
	I0429 12:26:09.371443   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0429 12:26:09.371998   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:09.372094   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:09.372184   17722 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 12:26:09.372192   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0429 12:26:09.372282   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:09.372344   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:09.373081   17722 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 12:26:09.373088   17722 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 12:26:09.373091   17722 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 12:26:09.373096   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:09.373185   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:09.373265   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:09.373344   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:09.373427   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:09.373540   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:09.373715   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:09.373722   17722 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 12:26:09.398851   17722 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0429 12:26:12.461040   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
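Once the VM has an address, libmachine's WaitForSSH loop repeatedly runs a no-op command ("exit 0") over SSH with the machine's generated key until it succeeds; the single "handshake failed ... no supported methods remain" line above is one such failed attempt while the guest was still finishing boot. The following is a rough sketch of that probe using golang.org/x/crypto/ssh; host, username and key path are taken from the log, whereas the retry interval and the overall timeout are assumptions, not minikube's actual values.

package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH keeps dialing the node over SSH with the machine's private key
// until a trivial command succeeds, mirroring the "Getting to WaitForSSH
// function" / "exit 0" probe in the log.
func waitForSSH(addr, user, keyPath string, timeout time.Duration) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		Timeout:         10 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		client, err := ssh.Dial("tcp", addr, cfg)
		if err == nil {
			sess, serr := client.NewSession()
			if serr == nil {
				rerr := sess.Run("exit 0")
				sess.Close()
				client.Close()
				if rerr == nil {
					return nil // SSH is available
				}
			} else {
				client.Close()
			}
		}
		// e.g. "ssh: handshake failed ... no supported methods remain" while
		// the guest is still writing authorized_keys; wait and retry.
		time.Sleep(3 * time.Second)
	}
	return fmt.Errorf("timed out waiting for SSH on %s", addr)
}

func main() {
	err := waitForSSH("192.169.0.54:22", "docker",
		"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa",
		2*time.Minute)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("SSH reachable")
}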
	I0429 12:26:12.461055   17722 main.go:141] libmachine: Detecting the provisioner...
	I0429 12:26:12.461061   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.461182   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.461292   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.461377   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.461470   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.461607   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:12.461746   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:12.461754   17722 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 12:26:12.523397   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 12:26:12.523438   17722 main.go:141] libmachine: found compatible host: buildroot
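"Detecting the provisioner" boils down to running cat /etc/os-release and matching on the NAME field, which is how this guest is identified as Buildroot and handed to the buildroot provisioner. A small sketch of that parse, assuming the os-release text has already been captured as a string (the SSH plumbing is omitted):

package main

import (
	"bufio"
	"fmt"
	"strings"
)

// osReleaseName extracts the NAME= field from /etc/os-release output,
// e.g. "Buildroot" for the guest in the log above.
func osReleaseName(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		if strings.HasPrefix(line, "NAME=") {
			return strings.Trim(strings.TrimPrefix(line, "NAME="), `"`)
		}
	}
	return ""
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\nPRETTY_NAME=\"Buildroot 2023.02.9\"\n"
	fmt.Println(osReleaseName(out)) // Buildroot
}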
	I0429 12:26:12.523445   17722 main.go:141] libmachine: Provisioning with buildroot...
	I0429 12:26:12.523450   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0429 12:26:12.523578   17722 buildroot.go:166] provisioning hostname "ha-431000-m02"
	I0429 12:26:12.523587   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0429 12:26:12.523662   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.523743   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.523829   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.523933   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.524027   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.524160   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:12.524319   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:12.524328   17722 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m02 && echo "ha-431000-m02" | sudo tee /etc/hostname
	I0429 12:26:12.595067   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m02
	
	I0429 12:26:12.595081   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.595219   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.595311   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.595396   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.595487   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.595609   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:12.595751   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:12.595763   17722 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 12:26:12.665199   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:26:12.665226   17722 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18774-15855/.minikube CaCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18774-15855/.minikube}
	I0429 12:26:12.665245   17722 buildroot.go:174] setting up certificates
	I0429 12:26:12.665255   17722 provision.go:84] configureAuth start
	I0429 12:26:12.665262   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0429 12:26:12.665398   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0429 12:26:12.665483   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.665568   17722 provision.go:143] copyHostCerts
	I0429 12:26:12.665595   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem
	I0429 12:26:12.665640   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem, removing ...
	I0429 12:26:12.665646   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem
	I0429 12:26:12.665779   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem (1082 bytes)
	I0429 12:26:12.665976   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem
	I0429 12:26:12.666005   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem, removing ...
	I0429 12:26:12.666009   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem
	I0429 12:26:12.666094   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem (1123 bytes)
	I0429 12:26:12.666237   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem
	I0429 12:26:12.666265   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem, removing ...
	I0429 12:26:12.666269   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem
	I0429 12:26:12.666334   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem (1675 bytes)
	I0429 12:26:12.666486   17722 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m02 san=[127.0.0.1 192.169.0.54 ha-431000-m02 localhost minikube]
	I0429 12:26:12.788915   17722 provision.go:177] copyRemoteCerts
	I0429 12:26:12.788965   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 12:26:12.788978   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.789131   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.789238   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.789318   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.789407   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0429 12:26:12.827773   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 12:26:12.827848   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 12:26:12.847569   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 12:26:12.847650   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 12:26:12.867106   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 12:26:12.867164   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 12:26:12.886794   17722 provision.go:87] duration metric: took 221.533222ms to configureAuth
	I0429 12:26:12.886806   17722 buildroot.go:189] setting minikube options for container-runtime
	I0429 12:26:12.886945   17722 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:26:12.886958   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:12.887088   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.887189   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.887288   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.887371   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.887445   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.887559   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:12.887688   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:12.887695   17722 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 12:26:12.949449   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 12:26:12.960507   17722 buildroot.go:70] root file system type: tmpfs
	I0429 12:26:12.960630   17722 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 12:26:12.960650   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.960811   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.960918   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.961031   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.961144   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.961314   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:12.961496   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:12.961559   17722 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.53"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 12:26:13.034441   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.53
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 12:26:13.034459   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:13.034584   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:13.034670   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:13.034771   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:13.034868   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:13.034990   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:13.035131   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:13.035143   17722 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 12:26:14.527695   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 12:26:14.527713   17722 main.go:141] libmachine: Checking connection to Docker...
	I0429 12:26:14.527719   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetURL
	I0429 12:26:14.527859   17722 main.go:141] libmachine: Docker is up and running!
	I0429 12:26:14.527867   17722 main.go:141] libmachine: Reticulating splines...
	I0429 12:26:14.527872   17722 client.go:171] duration metric: took 15.869221462s to LocalClient.Create
	I0429 12:26:14.527883   17722 start.go:167] duration metric: took 15.869255705s to libmachine.API.Create "ha-431000"
	I0429 12:26:14.527888   17722 start.go:293] postStartSetup for "ha-431000-m02" (driver="hyperkit")
	I0429 12:26:14.527895   17722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 12:26:14.527905   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:14.528059   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 12:26:14.528083   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:14.528190   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:14.528277   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:14.528371   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:14.528462   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0429 12:26:14.571874   17722 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 12:26:14.575715   17722 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 12:26:14.575725   17722 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18774-15855/.minikube/addons for local assets ...
	I0429 12:26:14.575820   17722 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18774-15855/.minikube/files for local assets ...
	I0429 12:26:14.575979   17722 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem -> 163012.pem in /etc/ssl/certs
	I0429 12:26:14.575985   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem -> /etc/ssl/certs/163012.pem
	I0429 12:26:14.576144   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 12:26:14.585193   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem --> /etc/ssl/certs/163012.pem (1708 bytes)
	I0429 12:26:14.617800   17722 start.go:296] duration metric: took 89.904272ms for postStartSetup
	I0429 12:26:14.617830   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0429 12:26:14.618443   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0429 12:26:14.618595   17722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json ...
	I0429 12:26:14.619362   17722 start.go:128] duration metric: took 15.993555623s to createHost
	I0429 12:26:14.619378   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:14.619471   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:14.619554   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:14.619637   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:14.619749   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:14.619852   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:14.619986   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:14.619993   17722 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0429 12:26:14.684608   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714418773.901729701
	
	I0429 12:26:14.684618   17722 fix.go:216] guest clock: 1714418773.901729701
	I0429 12:26:14.684638   17722 fix.go:229] Guest: 2024-04-29 12:26:13.901729701 -0700 PDT Remote: 2024-04-29 12:26:14.619372 -0700 PDT m=+66.737316646 (delta=-717.642299ms)
	I0429 12:26:14.684650   17722 fix.go:200] guest clock delta is within tolerance: -717.642299ms
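The probe above (logged as date +%!s(MISSING).%!N(MISSING) because the command string passes through a printf-style logger, but evidently date +%s.%N) returns the guest's wall clock, which is compared against the host clock; the -717ms delta is judged "within tolerance", so no clock fix is applied. Here is a minimal sketch of that comparison; the parsing and the two timestamps mirror the log, but the tolerance constant is a hypothetical value chosen only for illustration.

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// guestClockDelta parses the "seconds.nanoseconds" string returned by
// `date +%s.%N` on the guest (e.g. "1714418773.901729701" in the log) and
// returns guestTime minus hostTime.
func guestClockDelta(out string, host time.Time) (time.Duration, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return 0, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return 0, err
		}
	}
	guest := time.Unix(sec, nsec)
	return guest.Sub(host), nil
}

func main() {
	// Host-side timestamp from the log: 2024-04-29 12:26:14.619372 -0700 PDT.
	host := time.Date(2024, 4, 29, 19, 26, 14, 619372000, time.UTC)
	delta, err := guestClockDelta("1714418773.901729701", host)
	if err != nil {
		panic(err)
	}
	// Hypothetical tolerance; the log only states the delta is "within tolerance".
	const tolerance = 2 * time.Second
	fmt.Printf("delta=%v withinTolerance=%v\n", delta, math.Abs(delta.Seconds()) <= tolerance.Seconds())
}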
	I0429 12:26:14.684654   17722 start.go:83] releasing machines lock for "ha-431000-m02", held for 16.058950363s
	I0429 12:26:14.684671   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:14.684806   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0429 12:26:14.706148   17722 out.go:177] * Found network options:
	I0429 12:26:14.727219   17722 out.go:177]   - NO_PROXY=192.169.0.53
	W0429 12:26:14.749052   17722 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 12:26:14.749119   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:14.749950   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:14.750211   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:14.750321   17722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 12:26:14.750357   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	W0429 12:26:14.750432   17722 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 12:26:14.750551   17722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 12:26:14.750572   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:14.750573   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:14.750783   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:14.750814   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:14.751054   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:14.751092   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:14.751333   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0429 12:26:14.751365   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:14.751506   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	W0429 12:26:14.787217   17722 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 12:26:14.787277   17722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 12:26:14.848201   17722 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 12:26:14.848227   17722 start.go:494] detecting cgroup driver to use...
	I0429 12:26:14.848358   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:26:14.865009   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 12:26:14.874483   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 12:26:14.883536   17722 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 12:26:14.883585   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 12:26:14.892833   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:26:14.901853   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 12:26:14.910642   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:26:14.919451   17722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 12:26:14.928628   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 12:26:14.937504   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 12:26:14.946508   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 12:26:14.955539   17722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 12:26:14.963731   17722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 12:26:14.971817   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:26:15.075369   17722 ssh_runner.go:195] Run: sudo systemctl restart containerd
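Before settling on a container runtime, minikube points crictl at containerd and rewrites /etc/containerd/config.toml with the series of sed -i commands above: cgroupfs instead of the systemd cgroup driver, the runc v2 shim, /etc/cni/net.d as the CNI conf dir, and so on, then runs systemctl daemon-reload and restarts containerd. As a rough Go equivalent of the central substitution (file I/O omitted; only the SystemdCgroup edit is shown):

package main

import (
	"fmt"
	"regexp"
)

// setCgroupfsDriver roughly mirrors the sed command in the log:
//   sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g'
// forcing containerd's runc shim onto the cgroupfs driver.
func setCgroupfsDriver(configTOML string) string {
	re := regexp.MustCompile(`(?m)^( *)SystemdCgroup = .*$`)
	return re.ReplaceAllString(configTOML, "${1}SystemdCgroup = false")
}

func main() {
	cfg := "          [plugins.\"io.containerd.grpc.v1.cri\".containerd.runtimes.runc.options]\n            SystemdCgroup = true\n"
	fmt.Print(setCgroupfsDriver(cfg))
}

The same cgroup-driver decision is repeated for Docker itself a few lines further down (docker.go:574), where /etc/docker/daemon.json is written before the fatal docker restart.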
	I0429 12:26:15.095140   17722 start.go:494] detecting cgroup driver to use...
	I0429 12:26:15.095227   17722 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 12:26:15.114227   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:26:15.131639   17722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 12:26:15.152206   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:26:15.163545   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:26:15.173829   17722 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 12:26:15.196676   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:26:15.207120   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:26:15.222241   17722 ssh_runner.go:195] Run: which cri-dockerd
	I0429 12:26:15.225235   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 12:26:15.232398   17722 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 12:26:15.245911   17722 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 12:26:15.342741   17722 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 12:26:15.443507   17722 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 12:26:15.443528   17722 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 12:26:15.456917   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:26:15.554009   17722 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 12:27:16.592140   17722 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.038821319s)
	I0429 12:27:16.592205   17722 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0429 12:27:16.627966   17722 out.go:177] 
	W0429 12:27:16.650769   17722 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 19:26:12 ha-431000-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 19:26:12 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:12.562442558Z" level=info msg="Starting up"
	Apr 29 19:26:12 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:12.563076350Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 19:26:12 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:12.567928747Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=527
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.583689216Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597320910Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597367066Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597410988Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597446593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597506031Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597540413Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597805772Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597892122Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597933427Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597944227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.598009110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.598199858Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.599844343Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.599884447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.599970866Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.600003444Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.600098323Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.600162313Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.600172836Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.602841911Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.602899413Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.602913390Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.602923442Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.602932201Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603001707Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603176643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603272136Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603306202Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603317268Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603331817Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603343600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603351894Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603363377Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603373358Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603384153Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603392284Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603399425Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603411790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603424697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603433133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603441352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603448955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603456927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603465082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603473249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603488023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603499629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603507180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603514591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603523215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603533846Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603547413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603558028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603566468Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603593185Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603603060Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603610561Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603617284Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603705135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603740552Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603750151Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603879153Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603936791Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603967392Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603999139Z" level=info msg="containerd successfully booted in 0.021874s"
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.594997677Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.598794560Z" level=info msg="Loading containers: start."
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.706853403Z" level=info msg="Loading containers: done."
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.713747539Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.713886174Z" level=info msg="Daemon has completed initialization"
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.742027280Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 19:26:13 ha-431000-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.744424276Z" level=info msg="API listen on [::]:2376"
	Apr 29 19:26:14 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:14.784502897Z" level=info msg="Processing signal 'terminated'"
	Apr 29 19:26:14 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:14.785439200Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 19:26:14 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:14.785771501Z" level=info msg="Daemon shutdown complete"
	Apr 29 19:26:14 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:14.785832350Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 19:26:14 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:14.785935215Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 19:26:14 ha-431000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 19:26:15 ha-431000-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 19:26:15 ha-431000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 19:26:15 ha-431000-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 19:26:15 ha-431000-m02 dockerd[864]: time="2024-04-29T19:26:15.839252866Z" level=info msg="Starting up"
	Apr 29 19:27:16 ha-431000-m02 dockerd[864]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 19:27:16 ha-431000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 19:27:16 ha-431000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 19:27:16 ha-431000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0429 12:27:16.650883   17722 out.go:239] * 
	W0429 12:27:16.652130   17722 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 12:27:16.735182   17722 out.go:177] 
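What actually kills this run is visible in the journal above: after minikube writes /etc/docker/daemon.json and restarts Docker, the second dockerd instance (pid 864) spends the full minute trying to dial /run/containerd/containerd.sock, gives up with "context deadline exceeded", and systemctl restart docker returns 1, so the start aborts with RUNTIME_ENABLE. A quick, hypothetical diagnostic (not part of minikube) for checking from inside the guest whether anything is listening on that socket:

package main

import (
	"fmt"
	"net"
	"os"
	"time"
)

func main() {
	// Socket path taken from the dockerd error in the journal above.
	const sock = "/run/containerd/containerd.sock"

	conn, err := net.DialTimeout("unix", sock, 5*time.Second)
	if err != nil {
		// Matches the failure mode in the log: nothing accepting on the socket,
		// so dockerd's own dial eventually hits its deadline.
		fmt.Fprintf(os.Stderr, "containerd socket not reachable: %v\n", err)
		os.Exit(1)
	}
	conn.Close()
	fmt.Println("containerd socket is accepting connections")
}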
	
	
	==> Docker <==
	Apr 29 19:27:18 ha-431000 dockerd[1113]: 2024/04/29 19:27:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:27:18 ha-431000 dockerd[1113]: 2024/04/29 19:27:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:27:18 ha-431000 dockerd[1113]: 2024/04/29 19:27:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:27:19 ha-431000 dockerd[1113]: 2024/04/29 19:27:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:27:19 ha-431000 dockerd[1113]: 2024/04/29 19:27:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:27:19 ha-431000 dockerd[1113]: 2024/04/29 19:27:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:27:21 ha-431000 dockerd[1119]: time="2024-04-29T19:27:21.826378264Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 19:27:21 ha-431000 dockerd[1119]: time="2024-04-29T19:27:21.826530023Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 19:27:21 ha-431000 dockerd[1119]: time="2024-04-29T19:27:21.826603491Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:27:21 ha-431000 dockerd[1119]: time="2024-04-29T19:27:21.826775463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:27:21 ha-431000 cri-dockerd[1017]: time="2024-04-29T19:27:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a77f2c4e9a00b659bd23a71b2e06a6597386312621d0ef4b80802241e48f11f3/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 29 19:27:23 ha-431000 cri-dockerd[1017]: time="2024-04-29T19:27:23Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 29 19:27:23 ha-431000 dockerd[1119]: time="2024-04-29T19:27:23.131163360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 19:27:23 ha-431000 dockerd[1119]: time="2024-04-29T19:27:23.131274726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 19:27:23 ha-431000 dockerd[1119]: time="2024-04-29T19:27:23.131594744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:27:23 ha-431000 dockerd[1119]: time="2024-04-29T19:27:23.132740553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:39:17 ha-431000 dockerd[1113]: 2024/04/29 19:39:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:39:18 ha-431000 dockerd[1113]: 2024/04/29 19:39:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:39:18 ha-431000 dockerd[1113]: 2024/04/29 19:39:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:39:18 ha-431000 dockerd[1113]: 2024/04/29 19:39:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:39:18 ha-431000 dockerd[1113]: 2024/04/29 19:39:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:39:18 ha-431000 dockerd[1113]: 2024/04/29 19:39:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:39:19 ha-431000 dockerd[1113]: 2024/04/29 19:39:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:39:19 ha-431000 dockerd[1113]: 2024/04/29 19:39:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:39:19 ha-431000 dockerd[1113]: 2024/04/29 19:39:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d991ad36f678a       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   11 minutes ago      Running             busybox                   0                   a77f2c4e9a00b       busybox-fc5497c4f-hb957
	1eccc3b061df0       cbb01a7bd410d                                                                                         13 minutes ago      Running             coredns                   0                   1d309f3bad709       coredns-7db6d8ff4d-42q4w
	5fc9c9934ce6d       cbb01a7bd410d                                                                                         13 minutes ago      Running             coredns                   0                   b653ad03a91d0       coredns-7db6d8ff4d-vpt8j
	2fe73ca28b695       6e38f40d628db                                                                                         13 minutes ago      Running             storage-provisioner       0                   5910bbb306534       storage-provisioner
	2fc7f6cf26c66       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              13 minutes ago      Running             kindnet-cni               0                   77287d05e6e30       kindnet-7v2x2
	dd7e6dcf8c180       a0bf559e280cf                                                                                         13 minutes ago      Running             kube-proxy                0                   68ae02827d8f8       kube-proxy-24zxg
	25e21b3064d0f       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     13 minutes ago      Running             kube-vip                  0                   eb02781b4cb9e       kube-vip-ha-431000
	253cb761306be       c42f13656d0b2                                                                                         13 minutes ago      Running             kube-apiserver            0                   0b29b8a237387       kube-apiserver-ha-431000
	8689b2595f852       c7aad43836fa5                                                                                         13 minutes ago      Running             kube-controller-manager   0                   9353c1ab6af44       kube-controller-manager-ha-431000
	395f732c4ecd8       259c8277fcbbc                                                                                         13 minutes ago      Running             kube-scheduler            0                   b5d69e14edefa       kube-scheduler-ha-431000
	f0af9c768f874       3861cfcd7c04c                                                                                         13 minutes ago      Running             etcd                      0                   d3b2bec8afdb2       etcd-ha-431000
	
	
	==> coredns [1eccc3b061df] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59271 - 28720 "HINFO IN 7673301142508958601.5024328155936603833. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011541653s
	[INFO] 10.244.0.4:50854 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.054331652s
	[INFO] 10.244.0.4:59171 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.013535598s
	[INFO] 10.244.0.4:59678 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205886s
	[INFO] 10.244.0.4:40138 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.017345083s
	[INFO] 10.244.0.4:49479 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01082099s
	[INFO] 10.244.0.4:35756 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000317739s
	[INFO] 10.244.0.4:51111 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143818s
	[INFO] 10.244.0.4:50301 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000055524s
	[INFO] 10.244.0.4:35729 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076289s
	[INFO] 10.244.0.4:33262 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000085602s
	
	
	==> coredns [5fc9c9934ce6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46682 - 61825 "HINFO IN 2083561514213073366.6289946638194081086. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012255792s
	[INFO] 10.244.0.4:44627 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165757s
	[INFO] 10.244.0.4:54619 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.011057673s
	[INFO] 10.244.0.4:40906 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000248422s
	[INFO] 10.244.0.4:38496 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162208s
	[INFO] 10.244.0.4:49726 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000162889s
	[INFO] 10.244.0.4:53369 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076076s
	[INFO] 10.244.0.4:38793 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118909s
	[INFO] 10.244.0.4:38671 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049671s
	[INFO] 10.244.0.4:35662 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000117262s
	[INFO] 10.244.0.4:56289 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000077023s
	
	
	==> describe nodes <==
	Name:               ha-431000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-431000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=ha-431000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T12_25_45_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:25:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-431000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:39:21 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:38:00 +0000   Mon, 29 Apr 2024 19:25:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:38:00 +0000   Mon, 29 Apr 2024 19:25:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:38:00 +0000   Mon, 29 Apr 2024 19:25:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:38:00 +0000   Mon, 29 Apr 2024 19:26:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.53
	  Hostname:    ha-431000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 0347e3fb0470420c86cb19dc5e37eb8c
	  System UUID:                24bd46dd-0000-0000-965c-91e20eb4b3f3
	  Boot ID:                    5a51c8ef-7deb-418c-a18b-22ccf6b75cab
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hb957              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-42q4w             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-7db6d8ff4d-vpt8j             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-431000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-7v2x2                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-431000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-431000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-24zxg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-431000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-431000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 13m   kube-proxy       
	  Normal  Starting                 13m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  13m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  13m   kubelet          Node ha-431000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m   kubelet          Node ha-431000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m   kubelet          Node ha-431000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           13m   node-controller  Node ha-431000 event: Registered Node ha-431000 in Controller
	  Normal  NodeReady                13m   kubelet          Node ha-431000 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.683718] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.250560] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +1.108677] systemd-fstab-generator[500]: Ignoring "noauto" option for root device
	[  +0.110997] systemd-fstab-generator[512]: Ignoring "noauto" option for root device
	[  +1.889303] systemd-fstab-generator[735]: Ignoring "noauto" option for root device
	[  +0.059318] kauditd_printk_skb: 67 callbacks suppressed
	[  +0.256507] systemd-fstab-generator[770]: Ignoring "noauto" option for root device
	[  +0.100729] systemd-fstab-generator[782]: Ignoring "noauto" option for root device
	[  +0.117080] systemd-fstab-generator[796]: Ignoring "noauto" option for root device
	[  +2.384127] systemd-fstab-generator[970]: Ignoring "noauto" option for root device
	[  +0.098660] systemd-fstab-generator[982]: Ignoring "noauto" option for root device
	[  +0.104412] systemd-fstab-generator[994]: Ignoring "noauto" option for root device
	[  +0.132603] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +4.124699] systemd-fstab-generator[1104]: Ignoring "noauto" option for root device
	[  +0.056838] kauditd_printk_skb: 205 callbacks suppressed
	[  +2.446594] systemd-fstab-generator[1304]: Ignoring "noauto" option for root device
	[  +3.800237] systemd-fstab-generator[1492]: Ignoring "noauto" option for root device
	[  +0.050917] kauditd_printk_skb: 73 callbacks suppressed
	[  +7.212746] systemd-fstab-generator[1990]: Ignoring "noauto" option for root device
	[  +0.076591] kauditd_printk_skb: 72 callbacks suppressed
	[ +14.491572] kauditd_printk_skb: 12 callbacks suppressed
	[Apr29 19:26] kauditd_printk_skb: 38 callbacks suppressed
	[Apr29 19:27] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [f0af9c768f87] <==
	{"level":"info","ts":"2024-04-29T19:25:39.452287Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9dc59dab1a1aba0","initial-advertise-peer-urls":["https://192.169.0.53:2380"],"listen-peer-urls":["https://192.169.0.53:2380"],"advertise-client-urls":["https://192.169.0.53:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.53:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T19:25:39.452539Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T19:25:39.452814Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.169.0.53:2380"}
	{"level":"info","ts":"2024-04-29T19:25:39.4529Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.53:2380"}
	{"level":"info","ts":"2024-04-29T19:25:40.22976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc59dab1a1aba0 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-29T19:25:40.229854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc59dab1a1aba0 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-29T19:25:40.22989Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc59dab1a1aba0 received MsgPreVoteResp from 9dc59dab1a1aba0 at term 1"}
	{"level":"info","ts":"2024-04-29T19:25:40.229907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc59dab1a1aba0 became candidate at term 2"}
	{"level":"info","ts":"2024-04-29T19:25:40.22992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc59dab1a1aba0 received MsgVoteResp from 9dc59dab1a1aba0 at term 2"}
	{"level":"info","ts":"2024-04-29T19:25:40.229934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc59dab1a1aba0 became leader at term 2"}
	{"level":"info","ts":"2024-04-29T19:25:40.229975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9dc59dab1a1aba0 elected leader 9dc59dab1a1aba0 at term 2"}
	{"level":"info","ts":"2024-04-29T19:25:40.232269Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"9dc59dab1a1aba0","local-member-attributes":"{Name:ha-431000 ClientURLs:[https://192.169.0.53:2379]}","request-path":"/0/members/9dc59dab1a1aba0/attributes","cluster-id":"ab7009b4269bb52","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T19:25:40.232472Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T19:25:40.232823Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:25:40.233029Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T19:25:40.233086Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T19:25:40.236119Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T19:25:40.237354Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T19:25:40.237915Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.53:2379"}
	{"level":"info","ts":"2024-04-29T19:25:40.284926Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ab7009b4269bb52","local-member-id":"9dc59dab1a1aba0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:25:40.285192Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:25:40.285242Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:35:40.294897Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":967}
	{"level":"info","ts":"2024-04-29T19:35:40.297588Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":967,"took":"2.224451ms","hash":4246457979,"current-db-size-bytes":2424832,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2424832,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-04-29T19:35:40.297735Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4246457979,"revision":967,"compact-revision":-1}
	
	
	==> kernel <==
	 19:39:22 up 14 min,  0 users,  load average: 0.31, 0.17, 0.10
	Linux ha-431000 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2fc7f6cf26c6] <==
	I0429 19:37:13.068667       1 main.go:227] handling current node
	I0429 19:37:23.075833       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:37:23.075864       1 main.go:227] handling current node
	I0429 19:37:33.083407       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:37:33.083494       1 main.go:227] handling current node
	I0429 19:37:43.087668       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:37:43.087827       1 main.go:227] handling current node
	I0429 19:37:53.092400       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:37:53.092637       1 main.go:227] handling current node
	I0429 19:38:03.096843       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:38:03.096888       1 main.go:227] handling current node
	I0429 19:38:13.103825       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:38:13.103856       1 main.go:227] handling current node
	I0429 19:38:23.113180       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:38:23.113226       1 main.go:227] handling current node
	I0429 19:38:33.117375       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:38:33.117662       1 main.go:227] handling current node
	I0429 19:38:43.123849       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:38:43.124059       1 main.go:227] handling current node
	I0429 19:38:53.135472       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:38:53.135690       1 main.go:227] handling current node
	I0429 19:39:03.141037       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:39:03.141082       1 main.go:227] handling current node
	I0429 19:39:13.153671       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:39:13.153913       1 main.go:227] handling current node
	
	
	==> kube-apiserver [253cb761306b] <==
	I0429 19:25:41.298355       1 controller.go:615] quota admission added evaluator for: namespaces
	I0429 19:25:41.298726       1 aggregator.go:165] initial CRD sync complete...
	I0429 19:25:41.298734       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 19:25:41.298739       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 19:25:41.298742       1 cache.go:39] Caches are synced for autoregister controller
	I0429 19:25:41.341235       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 19:25:42.199810       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0429 19:25:42.203107       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0429 19:25:42.203598       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 19:25:42.475929       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 19:25:42.501043       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 19:25:42.531435       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0429 19:25:42.536355       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.53]
	I0429 19:25:42.537008       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 19:25:42.539635       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 19:25:43.279622       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 19:25:44.788367       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 19:25:44.794388       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 19:25:44.803386       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 19:25:58.079997       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0429 19:25:58.240228       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0429 19:39:15.945064       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:62063: use of closed network connection
	E0429 19:39:16.427565       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:62071: use of closed network connection
	E0429 19:39:16.885095       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:62079: use of closed network connection
	E0429 19:39:20.456182       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:62115: use of closed network connection
	
	
	==> kube-controller-manager [8689b2595f85] <==
	I0429 19:25:58.194805       1 shared_informer.go:320] Caches are synced for persistent volume
	I0429 19:25:58.197294       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="113.965304ms"
	I0429 19:25:58.210327       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-431000" podCIDRs=["10.244.0.0/24"]
	I0429 19:25:58.243038       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="45.71687ms"
	I0429 19:25:58.317120       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="73.53619ms"
	I0429 19:25:58.317171       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="31.981µs"
	I0429 19:25:58.580329       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 19:25:58.580784       1 garbagecollector.go:157] "All resource monitors have synced. Proceeding to collect garbage" logger="garbage-collector-controller"
	I0429 19:25:58.589175       1 shared_informer.go:320] Caches are synced for garbage collector
	I0429 19:26:06.946975       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.17µs"
	I0429 19:26:06.950414       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="27.785µs"
	I0429 19:26:06.958967       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.397µs"
	I0429 19:26:06.969829       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.121µs"
	I0429 19:26:07.886482       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.564µs"
	I0429 19:26:07.917554       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="9.073261ms"
	I0429 19:26:07.917615       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.945µs"
	I0429 19:26:07.935068       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.658277ms"
	I0429 19:26:07.935131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="26.868µs"
	I0429 19:26:08.387295       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0429 19:27:20.003021       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.397546ms"
	I0429 19:27:20.009762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.493528ms"
	I0429 19:27:20.018226       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.419727ms"
	I0429 19:27:20.018502       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.67µs"
	I0429 19:27:23.408503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.094694ms"
	I0429 19:27:23.408538       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.077µs"
	
	
	==> kube-proxy [dd7e6dcf8c18] <==
	I0429 19:25:59.229445       1 server_linux.go:69] "Using iptables proxy"
	I0429 19:25:59.241484       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.53"]
	I0429 19:25:59.271201       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 19:25:59.271245       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 19:25:59.271262       1 server_linux.go:165] "Using iptables Proxier"
	I0429 19:25:59.273894       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 19:25:59.274358       1 server.go:872] "Version info" version="v1.30.0"
	I0429 19:25:59.274426       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:25:59.275610       1 config.go:192] "Starting service config controller"
	I0429 19:25:59.275903       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 19:25:59.275968       1 config.go:101] "Starting endpoint slice config controller"
	I0429 19:25:59.276082       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 19:25:59.276583       1 config.go:319] "Starting node config controller"
	I0429 19:25:59.277663       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 19:25:59.376395       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 19:25:59.376459       1 shared_informer.go:320] Caches are synced for service config
	I0429 19:25:59.377863       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [395f732c4ecd] <==
	W0429 19:25:41.293554       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 19:25:41.294406       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 19:25:41.294738       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 19:25:41.294772       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 19:25:41.295245       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 19:25:41.295295       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 19:25:41.295534       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 19:25:41.295865       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 19:25:41.296096       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 19:25:41.296336       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 19:25:41.296557       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 19:25:41.296590       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 19:25:41.296636       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 19:25:41.296645       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 19:25:41.297185       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 19:25:41.297218       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 19:25:41.298191       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 19:25:41.298225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 19:25:42.168872       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 19:25:42.168946       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 19:25:42.211386       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 19:25:42.211426       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 19:25:42.376364       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 19:25:42.376406       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0429 19:25:42.882998       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 19:34:44 ha-431000 kubelet[1997]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:34:44 ha-431000 kubelet[1997]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:34:44 ha-431000 kubelet[1997]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:34:44 ha-431000 kubelet[1997]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:35:44 ha-431000 kubelet[1997]: E0429 19:35:44.685249    1997 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:35:44 ha-431000 kubelet[1997]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:35:44 ha-431000 kubelet[1997]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:35:44 ha-431000 kubelet[1997]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:35:44 ha-431000 kubelet[1997]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:36:44 ha-431000 kubelet[1997]: E0429 19:36:44.684785    1997 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:36:44 ha-431000 kubelet[1997]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:36:44 ha-431000 kubelet[1997]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:36:44 ha-431000 kubelet[1997]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:36:44 ha-431000 kubelet[1997]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:37:44 ha-431000 kubelet[1997]: E0429 19:37:44.685252    1997 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:37:44 ha-431000 kubelet[1997]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:37:44 ha-431000 kubelet[1997]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:37:44 ha-431000 kubelet[1997]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:37:44 ha-431000 kubelet[1997]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:38:44 ha-431000 kubelet[1997]: E0429 19:38:44.684405    1997 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:38:44 ha-431000 kubelet[1997]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:38:44 ha-431000 kubelet[1997]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:38:44 ha-431000 kubelet[1997]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:38:44 ha-431000 kubelet[1997]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:39:20 ha-431000 kubelet[1997]: E0429 19:39:20.456491    1997 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:55736->127.0.0.1:43599: write tcp 127.0.0.1:55736->127.0.0.1:43599: write: broken pipe
	
	
	==> storage-provisioner [2fe73ca28b69] <==
	I0429 19:26:07.672573       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0429 19:26:07.699495       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0429 19:26:07.699524       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0429 19:26:07.714130       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0429 19:26:07.729973       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_ha-431000_95565a70-289a-4179-a0be-cfc0aea5e428!
	I0429 19:26:07.727595       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"198bfd0e-cd93-4191-9956-e0c2576aab68", APIVersion:"v1", ResourceVersion:"426", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' ha-431000_95565a70-289a-4179-a0be-cfc0aea5e428 became leader
	I0429 19:26:07.830551       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_ha-431000_95565a70-289a-4179-a0be-cfc0aea5e428!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-431000 -n ha-431000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-431000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-fwxpx busybox-fc5497c4f-lrnxn
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/PingHostFromPods]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-431000 describe pod busybox-fc5497c4f-fwxpx busybox-fc5497c4f-lrnxn
helpers_test.go:282: (dbg) kubectl --context ha-431000 describe pod busybox-fc5497c4f-fwxpx busybox-fc5497c4f-lrnxn:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-fwxpx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4bjrr (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-4bjrr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  98s (x4 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	
	
	Name:             busybox-fc5497c4f-lrnxn
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-z7jzf (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-z7jzf:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                From               Message
	  ----     ------            ----               ----               -------
	  Warning  FailedScheduling  98s (x3 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/PingHostFromPods FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/PingHostFromPods (3.71s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (50.67s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-431000 -v=7 --alsologtostderr
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-431000 -v=7 --alsologtostderr: (47.51939492s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr
ha_test.go:234: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr: exit status 2 (347.484302ms)

                                                
                                                
-- stdout --
	ha-431000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-431000-m02
	type: Control Plane
	host: Running
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-431000-m03
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 12:40:10.881734   18133 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:40:10.881932   18133 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:40:10.881937   18133 out.go:304] Setting ErrFile to fd 2...
	I0429 12:40:10.881941   18133 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:40:10.882130   18133 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18774-15855/.minikube/bin
	I0429 12:40:10.882335   18133 out.go:298] Setting JSON to false
	I0429 12:40:10.882362   18133 mustload.go:65] Loading cluster: ha-431000
	I0429 12:40:10.882402   18133 notify.go:220] Checking for updates...
	I0429 12:40:10.883685   18133 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:40:10.883708   18133 status.go:255] checking status of ha-431000 ...
	I0429 12:40:10.884053   18133 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:40:10.884093   18133 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:40:10.892855   18133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:62211
	I0429 12:40:10.893184   18133 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:40:10.893599   18133 main.go:141] libmachine: Using API Version  1
	I0429 12:40:10.893608   18133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:40:10.893823   18133 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:40:10.893923   18133 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0429 12:40:10.894017   18133 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:40:10.894135   18133 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:40:10.895081   18133 status.go:330] ha-431000 host status = "Running" (err=<nil>)
	I0429 12:40:10.895104   18133 host.go:66] Checking if "ha-431000" exists ...
	I0429 12:40:10.895365   18133 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:40:10.895389   18133 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:40:10.903734   18133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:62213
	I0429 12:40:10.904088   18133 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:40:10.904415   18133 main.go:141] libmachine: Using API Version  1
	I0429 12:40:10.904423   18133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:40:10.904650   18133 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:40:10.904753   18133 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0429 12:40:10.904836   18133 host.go:66] Checking if "ha-431000" exists ...
	I0429 12:40:10.905091   18133 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:40:10.905115   18133 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:40:10.917488   18133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:62215
	I0429 12:40:10.917815   18133 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:40:10.918149   18133 main.go:141] libmachine: Using API Version  1
	I0429 12:40:10.918167   18133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:40:10.918366   18133 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:40:10.918469   18133 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:40:10.918592   18133 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:40:10.918614   18133 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:40:10.918698   18133 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:40:10.918773   18133 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:40:10.918882   18133 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:40:10.918981   18133 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:40:10.957627   18133 ssh_runner.go:195] Run: systemctl --version
	I0429 12:40:10.964888   18133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:40:10.975863   18133 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0429 12:40:10.975889   18133 api_server.go:166] Checking apiserver status ...
	I0429 12:40:10.975925   18133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 12:40:10.988125   18133 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1875/cgroup
	W0429 12:40:10.996709   18133 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1875/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:40:10.996761   18133 ssh_runner.go:195] Run: ls
	I0429 12:40:10.999895   18133 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0429 12:40:11.002882   18133 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0429 12:40:11.002893   18133 status.go:422] ha-431000 apiserver status = Running (err=<nil>)
	I0429 12:40:11.002903   18133 status.go:257] ha-431000 status: &{Name:ha-431000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:40:11.002914   18133 status.go:255] checking status of ha-431000-m02 ...
	I0429 12:40:11.003173   18133 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:40:11.003194   18133 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:40:11.011919   18133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:62219
	I0429 12:40:11.012247   18133 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:40:11.012561   18133 main.go:141] libmachine: Using API Version  1
	I0429 12:40:11.012574   18133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:40:11.012785   18133 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:40:11.012894   18133 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0429 12:40:11.012978   18133 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:40:11.013077   18133 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:40:11.014019   18133 status.go:330] ha-431000-m02 host status = "Running" (err=<nil>)
	I0429 12:40:11.014028   18133 host.go:66] Checking if "ha-431000-m02" exists ...
	I0429 12:40:11.014268   18133 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:40:11.014294   18133 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:40:11.022650   18133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:62221
	I0429 12:40:11.022994   18133 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:40:11.023316   18133 main.go:141] libmachine: Using API Version  1
	I0429 12:40:11.023327   18133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:40:11.023556   18133 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:40:11.023674   18133 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0429 12:40:11.023757   18133 host.go:66] Checking if "ha-431000-m02" exists ...
	I0429 12:40:11.024005   18133 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:40:11.024036   18133 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:40:11.032439   18133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:62223
	I0429 12:40:11.032760   18133 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:40:11.033106   18133 main.go:141] libmachine: Using API Version  1
	I0429 12:40:11.033125   18133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:40:11.033335   18133 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:40:11.033454   18133 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:40:11.033606   18133 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:40:11.033626   18133 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:40:11.033719   18133 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:40:11.033802   18133 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:40:11.033876   18133 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:40:11.033955   18133 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0429 12:40:11.068431   18133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:40:11.080588   18133 kubeconfig.go:125] found "ha-431000" server: "https://192.169.0.254:8443"
	I0429 12:40:11.080603   18133 api_server.go:166] Checking apiserver status ...
	I0429 12:40:11.080647   18133 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	W0429 12:40:11.091841   18133 api_server.go:170] stopped: unable to get apiserver pid: sudo pgrep -xnf kube-apiserver.*minikube.*: Process exited with status 1
	stdout:
	
	stderr:
	I0429 12:40:11.091853   18133 status.go:422] ha-431000-m02 apiserver status = Stopped (err=<nil>)
	I0429 12:40:11.091863   18133 status.go:257] ha-431000-m02 status: &{Name:ha-431000-m02 Host:Running Kubelet:Stopped APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:40:11.091874   18133 status.go:255] checking status of ha-431000-m03 ...
	I0429 12:40:11.092163   18133 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:40:11.092185   18133 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:40:11.101336   18133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:62226
	I0429 12:40:11.101685   18133 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:40:11.102005   18133 main.go:141] libmachine: Using API Version  1
	I0429 12:40:11.102015   18133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:40:11.102228   18133 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:40:11.102346   18133 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0429 12:40:11.102426   18133 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:40:11.102509   18133 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 18112
	I0429 12:40:11.103455   18133 status.go:330] ha-431000-m03 host status = "Running" (err=<nil>)
	I0429 12:40:11.103464   18133 host.go:66] Checking if "ha-431000-m03" exists ...
	I0429 12:40:11.103704   18133 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:40:11.103731   18133 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:40:11.112312   18133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:62228
	I0429 12:40:11.112644   18133 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:40:11.112983   18133 main.go:141] libmachine: Using API Version  1
	I0429 12:40:11.112997   18133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:40:11.113227   18133 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:40:11.113345   18133 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0429 12:40:11.113432   18133 host.go:66] Checking if "ha-431000-m03" exists ...
	I0429 12:40:11.113688   18133 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:40:11.113709   18133 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:40:11.122416   18133 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:62230
	I0429 12:40:11.122769   18133 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:40:11.123095   18133 main.go:141] libmachine: Using API Version  1
	I0429 12:40:11.123106   18133 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:40:11.123305   18133 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:40:11.123414   18133 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0429 12:40:11.123528   18133 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:40:11.123539   18133 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0429 12:40:11.123611   18133 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0429 12:40:11.123691   18133 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0429 12:40:11.123776   18133 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0429 12:40:11.123854   18133 sshutil.go:53] new ssh client: &{IP:192.169.0.55 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	I0429 12:40:11.154880   18133 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 12:40:11.165145   18133 status.go:257] ha-431000-m03 status: &{Name:ha-431000-m03 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:236: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-431000 logs -n 25: (2.255909931s)
helpers_test.go:252: TestMultiControlPlane/serial/AddWorkerNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                 Args                 |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:37 PDT | 29 Apr 24 12:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:37 PDT | 29 Apr 24 12:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:37 PDT | 29 Apr 24 12:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:37 PDT | 29 Apr 24 12:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:37 PDT | 29 Apr 24 12:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:37 PDT | 29 Apr 24 12:37 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:38 PDT | 29 Apr 24 12:38 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:38 PDT | 29 Apr 24 12:38 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT | 29 Apr 24 12:39 PDT |
	|         | jsonpath='{.items[*].status.podIP}'  |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT | 29 Apr 24 12:39 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT |                     |
	|         | busybox-fc5497c4f-fwxpx --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT | 29 Apr 24 12:39 PDT |
	|         | busybox-fc5497c4f-hb957 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT |                     |
	|         | busybox-fc5497c4f-lrnxn --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.io               |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT |                     |
	|         | busybox-fc5497c4f-fwxpx --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT | 29 Apr 24 12:39 PDT |
	|         | busybox-fc5497c4f-hb957 --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT |                     |
	|         | busybox-fc5497c4f-lrnxn --           |           |         |         |                     |                     |
	|         | nslookup kubernetes.default          |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT |                     |
	|         | busybox-fc5497c4f-fwxpx -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT | 29 Apr 24 12:39 PDT |
	|         | busybox-fc5497c4f-hb957 -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT |                     |
	|         | busybox-fc5497c4f-lrnxn -- nslookup  |           |         |         |                     |                     |
	|         | kubernetes.default.svc.cluster.local |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- get pods -o          | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT | 29 Apr 24 12:39 PDT |
	|         | jsonpath='{.items[*].metadata.name}' |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT |                     |
	|         | busybox-fc5497c4f-fwxpx              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT | 29 Apr 24 12:39 PDT |
	|         | busybox-fc5497c4f-hb957              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT | 29 Apr 24 12:39 PDT |
	|         | busybox-fc5497c4f-hb957 -- sh        |           |         |         |                     |                     |
	|         | -c ping -c 1 192.169.0.1             |           |         |         |                     |                     |
	| kubectl | -p ha-431000 -- exec                 | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT |                     |
	|         | busybox-fc5497c4f-lrnxn              |           |         |         |                     |                     |
	|         | -- sh -c nslookup                    |           |         |         |                     |                     |
	|         | host.minikube.internal | awk         |           |         |         |                     |                     |
	|         | 'NR==5' | cut -d' ' -f3              |           |         |         |                     |                     |
	| node    | add -p ha-431000 -v=7                | ha-431000 | jenkins | v1.33.0 | 29 Apr 24 12:39 PDT | 29 Apr 24 12:40 PDT |
	|         | --alsologtostderr                    |           |         |         |                     |                     |
	|---------|--------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 12:25:07
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 12:25:07.932524   17722 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:25:07.932931   17722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:25:07.932939   17722 out.go:304] Setting ErrFile to fd 2...
	I0429 12:25:07.932945   17722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:25:07.933219   17722 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18774-15855/.minikube/bin
	I0429 12:25:07.935193   17722 out.go:298] Setting JSON to false
	I0429 12:25:07.966866   17722 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":10477,"bootTime":1714408230,"procs":429,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0429 12:25:07.967005   17722 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 12:25:07.990175   17722 out.go:177] * [ha-431000] minikube v1.33.0 on Darwin 14.4.1
	I0429 12:25:08.073413   17722 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 12:25:08.048486   17722 notify.go:220] Checking for updates...
	I0429 12:25:08.160225   17722 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18774-15855/kubeconfig
	I0429 12:25:08.223714   17722 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 12:25:08.245577   17722 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:25:08.266426   17722 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18774-15855/.minikube
	I0429 12:25:08.296555   17722 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 12:25:08.318007   17722 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:25:08.348195   17722 out.go:177] * Using the hyperkit driver based on user configuration
	I0429 12:25:08.391518   17722 start.go:297] selected driver: hyperkit
	I0429 12:25:08.391552   17722 start.go:901] validating driver "hyperkit" against <nil>
	I0429 12:25:08.391574   17722 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 12:25:08.395979   17722 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:25:08.396124   17722 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18774-15855/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0429 12:25:08.404644   17722 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
	I0429 12:25:08.408717   17722 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:08.408755   17722 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0429 12:25:08.408783   17722 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 12:25:08.409041   17722 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0429 12:25:08.409091   17722 cni.go:84] Creating CNI manager for ""
	I0429 12:25:08.409101   17722 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0429 12:25:08.409113   17722 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0429 12:25:08.409179   17722 start.go:340] cluster config:
	{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:25:08.409294   17722 iso.go:125] acquiring lock: {Name:mk81eeca500d1c63233c315450a8b707db3bb810 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:25:08.451413   17722 out.go:177] * Starting "ha-431000" primary control-plane node in "ha-431000" cluster
	I0429 12:25:08.472458   17722 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 12:25:08.472552   17722 preload.go:147] Found local preload: /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 12:25:08.472575   17722 cache.go:56] Caching tarball of preloaded images
	I0429 12:25:08.472808   17722 preload.go:173] Found /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 12:25:08.472832   17722 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 12:25:08.473307   17722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json ...
	I0429 12:25:08.473347   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json: {Name:mk35a218377c5d37033d174261b78cb0d686fe80 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:08.474156   17722 start.go:360] acquireMachinesLock for ha-431000: {Name:mkd2435de3b02389c894a9c5afc7bb14c11265dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 12:25:08.474283   17722 start.go:364] duration metric: took 104.507µs to acquireMachinesLock for "ha-431000"
	I0429 12:25:08.474332   17722 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 12:25:08.474420   17722 start.go:125] createHost starting for "" (driver="hyperkit")
	I0429 12:25:08.516479   17722 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 12:25:08.516777   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:08.516851   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:08.526733   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61961
	I0429 12:25:08.527153   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:08.527581   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:08.527592   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:08.527830   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:08.527946   17722 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0429 12:25:08.528043   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:08.528148   17722 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0429 12:25:08.528173   17722 client.go:168] LocalClient.Create starting
	I0429 12:25:08.528213   17722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem
	I0429 12:25:08.528263   17722 main.go:141] libmachine: Decoding PEM data...
	I0429 12:25:08.528280   17722 main.go:141] libmachine: Parsing certificate...
	I0429 12:25:08.528335   17722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem
	I0429 12:25:08.528371   17722 main.go:141] libmachine: Decoding PEM data...
	I0429 12:25:08.528388   17722 main.go:141] libmachine: Parsing certificate...
	I0429 12:25:08.528404   17722 main.go:141] libmachine: Running pre-create checks...
	I0429 12:25:08.528414   17722 main.go:141] libmachine: (ha-431000) Calling .PreCreateCheck
	I0429 12:25:08.528488   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:08.528641   17722 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0429 12:25:08.529147   17722 main.go:141] libmachine: Creating machine...
	I0429 12:25:08.529155   17722 main.go:141] libmachine: (ha-431000) Calling .Create
	I0429 12:25:08.529220   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:08.529331   17722 main.go:141] libmachine: (ha-431000) DBG | I0429 12:25:08.529213   17730 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/18774-15855/.minikube
	I0429 12:25:08.529403   17722 main.go:141] libmachine: (ha-431000) Downloading /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18774-15855/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 12:25:08.704997   17722 main.go:141] libmachine: (ha-431000) DBG | I0429 12:25:08.704900   17730 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa...
	I0429 12:25:08.942793   17722 main.go:141] libmachine: (ha-431000) DBG | I0429 12:25:08.942700   17730 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/ha-431000.rawdisk...
	I0429 12:25:08.942814   17722 main.go:141] libmachine: (ha-431000) DBG | Writing magic tar header
	I0429 12:25:08.942822   17722 main.go:141] libmachine: (ha-431000) DBG | Writing SSH key tar header
	I0429 12:25:08.943194   17722 main.go:141] libmachine: (ha-431000) DBG | I0429 12:25:08.943155   17730 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000 ...
	I0429 12:25:09.300525   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:09.300543   17722 main.go:141] libmachine: (ha-431000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/hyperkit.pid
	I0429 12:25:09.300607   17722 main.go:141] libmachine: (ha-431000) DBG | Using UUID 24bdec35-56f8-46dd-965c-91e20eb4b3f3
	I0429 12:25:09.409075   17722 main.go:141] libmachine: (ha-431000) DBG | Generated MAC 12:79:5a:6e:b2:5a
	I0429 12:25:09.409094   17722 main.go:141] libmachine: (ha-431000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0429 12:25:09.409125   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"24bdec35-56f8-46dd-965c-91e20eb4b3f3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000198630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0429 12:25:09.409150   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"24bdec35-56f8-46dd-965c-91e20eb4b3f3", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc000198630)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/bzimage", Initrd:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0429 12:25:09.409187   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "24bdec35-56f8-46dd-965c-91e20eb4b3f3", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/ha-431000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0429 12:25:09.409213   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 24bdec35-56f8-46dd-965c-91e20eb4b3f3 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/ha-431000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/tty,log=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/console-ring -f kexec,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/bzimage,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0429 12:25:09.409223   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0429 12:25:09.412103   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 DEBUG: hyperkit: Pid is 17733
	I0429 12:25:09.412560   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 0
	I0429 12:25:09.412590   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:09.412663   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:09.413636   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:09.413744   17722 main.go:141] libmachine: (ha-431000) DBG | Found 51 entries in /var/db/dhcpd_leases!
	I0429 12:25:09.413764   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:09.413783   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:09.413800   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:09.413811   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:09.413824   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:09.413850   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:09.413862   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:09.413875   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:09.413883   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:09.413914   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:09.413927   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:09.413957   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:09.413970   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:09.413986   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:09.413992   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:09.414001   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:09.414010   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:09.414024   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:09.414041   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:09.414056   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:09.414068   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:09.414082   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:09.414098   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:09.414114   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:09.414144   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:09.414157   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:09.414169   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:09.414178   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:09.414187   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:09.414196   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:09.414206   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:09.414215   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:09.414231   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:09.414248   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:09.414272   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:09.414288   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:09.414300   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:09.414315   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:09.414327   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:09.414339   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:09.414350   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:09.414363   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:09.414382   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:09.414397   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:09.414414   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:09.414428   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:09.414441   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:09.414455   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:09.414467   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:09.414490   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:09.414509   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:09.419450   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0429 12:25:09.470495   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0429 12:25:09.471088   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0429 12:25:09.471110   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0429 12:25:09.471118   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0429 12:25:09.471128   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0429 12:25:09.850480   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0429 12:25:09.850496   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0429 12:25:09.966742   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0429 12:25:09.966763   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0429 12:25:09.966770   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0429 12:25:09.966776   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0429 12:25:09.967629   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0429 12:25:09.967639   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:09 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0429 12:25:11.414538   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 1
	I0429 12:25:11.414553   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:11.414604   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:11.415399   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:11.415486   17722 main.go:141] libmachine: (ha-431000) DBG | Found 51 entries in /var/db/dhcpd_leases!
	I0429 12:25:11.415495   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:11.415504   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:11.415511   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:11.415518   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:11.415539   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:11.415546   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:11.415584   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:11.415593   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:11.415606   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:11.415614   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:11.415624   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:11.415633   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:11.415641   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:11.415647   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:11.415654   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:11.415660   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:11.415667   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:11.415676   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:11.415682   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:11.415689   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:11.415695   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:11.415702   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:11.415709   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:11.415714   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:11.415722   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:11.415729   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:11.415749   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:11.415755   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:11.415763   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:11.415771   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:11.415782   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:11.415790   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:11.415798   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:11.415804   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:11.415809   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:11.415816   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:11.415822   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:11.415828   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:11.415836   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:11.415844   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:11.415850   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:11.415858   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:11.415864   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:11.415872   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:11.415879   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:11.415886   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:11.415893   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:11.415898   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:11.415910   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:11.415924   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:11.415936   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:13.416614   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 2
	I0429 12:25:13.416628   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:13.416689   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:13.417651   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:13.417739   17722 main.go:141] libmachine: (ha-431000) DBG | Found 51 entries in /var/db/dhcpd_leases!
	I0429 12:25:13.417747   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:13.417765   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:13.417774   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:13.417791   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:13.417803   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:13.417810   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:13.417818   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:13.417823   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:13.417830   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:13.417836   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:13.417843   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:13.417849   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:13.417861   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:13.417870   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:13.417883   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:13.417895   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:13.417902   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:13.417909   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:13.417922   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:13.417935   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:13.417949   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:13.417955   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:13.417969   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:13.417980   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:13.417992   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:13.418008   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:13.418016   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:13.418023   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:13.418031   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:13.418038   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:13.418044   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:13.418052   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:13.418059   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:13.418065   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:13.418083   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:13.418093   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:13.418101   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:13.418108   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:13.418116   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:13.418123   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:13.418130   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:13.418137   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:13.418144   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:13.418151   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:13.418163   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:13.418181   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:13.418190   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:13.418198   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:13.418205   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:13.418212   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:13.418220   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:15.211585   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:15 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0429 12:25:15.211693   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:15 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0429 12:25:15.211704   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:15 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0429 12:25:15.236755   17722 main.go:141] libmachine: (ha-431000) DBG | 2024/04/29 12:25:15 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0429 12:25:15.420156   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 3
	I0429 12:25:15.420179   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:15.420320   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:15.421755   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:15.421973   17722 main.go:141] libmachine: (ha-431000) DBG | Found 51 entries in /var/db/dhcpd_leases!
	I0429 12:25:15.421993   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:15.422007   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:15.422103   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:15.422117   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:15.422133   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:15.422156   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:15.422173   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:15.422185   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:15.422194   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:15.422211   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:15.422225   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:15.422235   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:15.422244   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:15.422263   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:15.422280   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:15.422297   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:15.422313   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:15.422336   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:15.422353   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:15.422369   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:15.422380   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:15.422393   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:15.422404   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:15.422412   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:15.422424   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:15.422432   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:15.422442   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:15.422456   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:15.422472   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:15.422492   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:15.422511   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:15.422535   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:15.422552   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:15.422563   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:15.422571   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:15.422581   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:15.422599   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:15.422609   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:15.422620   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:15.422628   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:15.422639   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:15.422670   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:15.422682   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:15.422700   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:15.422711   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:15.422721   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:15.422732   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:15.422754   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:15.422771   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:15.422797   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:15.422818   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:17.422424   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 4
	I0429 12:25:17.422440   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:17.422505   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:17.423336   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:17.423416   17722 main.go:141] libmachine: (ha-431000) DBG | Found 51 entries in /var/db/dhcpd_leases!
	I0429 12:25:17.423428   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:17.423437   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:17.423446   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:17.423453   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:17.423470   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:17.423477   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:17.423484   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:17.423492   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:17.423498   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:17.423504   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:17.423514   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:17.423519   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:17.423532   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:17.423542   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:17.423561   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:17.423570   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:17.423577   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:17.423593   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:17.423600   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:17.423608   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:17.423615   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:17.423621   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:17.423627   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:17.423634   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:17.423639   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:17.423647   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:17.423653   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:17.423666   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:17.423674   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:17.423681   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:17.423696   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:17.423704   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:17.423711   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:17.423718   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:17.423726   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:17.423733   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:17.423741   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:17.423750   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:17.423758   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:17.423764   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:17.423770   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:17.423780   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:17.423792   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:17.423800   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:17.423808   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:17.423815   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:17.423821   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:17.423838   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:17.423848   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:17.423855   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:17.423864   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:19.424012   17722 main.go:141] libmachine: (ha-431000) DBG | Attempt 5
	I0429 12:25:19.424040   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:19.424118   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:19.424931   17722 main.go:141] libmachine: (ha-431000) DBG | Searching for 12:79:5a:6e:b2:5a in /var/db/dhcpd_leases ...
	I0429 12:25:19.425019   17722 main.go:141] libmachine: (ha-431000) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:25:19.425031   17722 main.go:141] libmachine: (ha-431000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:25:19.425039   17722 main.go:141] libmachine: (ha-431000) DBG | Found match: 12:79:5a:6e:b2:5a
	I0429 12:25:19.425043   17722 main.go:141] libmachine: (ha-431000) DBG | IP: 192.169.0.53
	I0429 12:25:19.425100   17722 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0429 12:25:19.425750   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:19.425854   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:19.425968   17722 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 12:25:19.425977   17722 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0429 12:25:19.426076   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:19.426132   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:19.426957   17722 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 12:25:19.426968   17722 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 12:25:19.426974   17722 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 12:25:19.426979   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:19.427082   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:19.427181   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:19.427279   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:19.427395   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:19.427528   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:19.427739   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:19.427747   17722 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 12:25:20.480764   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:25:20.480777   17722 main.go:141] libmachine: Detecting the provisioner...
	I0429 12:25:20.480782   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:20.480910   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:20.481005   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.481084   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.481169   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:20.481302   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:20.481440   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:20.481448   17722 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 12:25:20.531217   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 12:25:20.531271   17722 main.go:141] libmachine: found compatible host: buildroot
	I0429 12:25:20.531277   17722 main.go:141] libmachine: Provisioning with buildroot...
	I0429 12:25:20.531282   17722 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0429 12:25:20.531410   17722 buildroot.go:166] provisioning hostname "ha-431000"
	I0429 12:25:20.531421   17722 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0429 12:25:20.531516   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:20.531611   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:20.531708   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.531810   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.531912   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:20.532028   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:20.532159   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:20.532169   17722 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000 && echo "ha-431000" | sudo tee /etc/hostname
	I0429 12:25:20.591606   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000
	
	I0429 12:25:20.591626   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:20.591764   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:20.591878   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.591967   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.592068   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:20.592207   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:20.592350   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:20.592361   17722 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 12:25:20.648101   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:25:20.648121   17722 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18774-15855/.minikube CaCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18774-15855/.minikube}
	I0429 12:25:20.648135   17722 buildroot.go:174] setting up certificates
	I0429 12:25:20.648144   17722 provision.go:84] configureAuth start
	I0429 12:25:20.648150   17722 main.go:141] libmachine: (ha-431000) Calling .GetMachineName
	I0429 12:25:20.648287   17722 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0429 12:25:20.648383   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:20.648467   17722 provision.go:143] copyHostCerts
	I0429 12:25:20.648496   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem
	I0429 12:25:20.648561   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem, removing ...
	I0429 12:25:20.648570   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem
	I0429 12:25:20.648715   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem (1082 bytes)
	I0429 12:25:20.648916   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem
	I0429 12:25:20.648956   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem, removing ...
	I0429 12:25:20.648961   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem
	I0429 12:25:20.649049   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem (1123 bytes)
	I0429 12:25:20.649197   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem
	I0429 12:25:20.649236   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem, removing ...
	I0429 12:25:20.649241   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem
	I0429 12:25:20.649330   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem (1675 bytes)
	I0429 12:25:20.649473   17722 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca-key.pem org=jenkins.ha-431000 san=[127.0.0.1 192.169.0.53 ha-431000 localhost minikube]
	I0429 12:25:20.983440   17722 provision.go:177] copyRemoteCerts
	I0429 12:25:20.983549   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 12:25:20.983566   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:20.983728   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:20.983833   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:20.983973   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:20.984120   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:21.014934   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 12:25:21.015004   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 12:25:21.034390   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 12:25:21.034453   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem --> /etc/docker/server.pem (1196 bytes)
	I0429 12:25:21.055272   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 12:25:21.055342   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0429 12:25:21.074968   17722 provision.go:87] duration metric: took 426.81697ms to configureAuth
	I0429 12:25:21.074981   17722 buildroot.go:189] setting minikube options for container-runtime
	I0429 12:25:21.075129   17722 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:25:21.075141   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:21.075289   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:21.075381   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:21.075460   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.075546   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.075623   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:21.075741   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:21.075864   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:21.075872   17722 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 12:25:21.124122   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 12:25:21.124135   17722 buildroot.go:70] root file system type: tmpfs
	I0429 12:25:21.124211   17722 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 12:25:21.124224   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:21.124366   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:21.124464   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.124564   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.124654   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:21.124800   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:21.124944   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:21.124987   17722 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 12:25:21.192987   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 12:25:21.193010   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:21.193155   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:21.193259   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.193352   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:21.193439   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:21.193576   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:21.193725   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:21.193737   17722 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 12:25:22.738902   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0429 12:25:22.738918   17722 main.go:141] libmachine: Checking connection to Docker...
	I0429 12:25:22.738926   17722 main.go:141] libmachine: (ha-431000) Calling .GetURL
	I0429 12:25:22.739069   17722 main.go:141] libmachine: Docker is up and running!
	I0429 12:25:22.739077   17722 main.go:141] libmachine: Reticulating splines...
	I0429 12:25:22.739081   17722 client.go:171] duration metric: took 14.211068638s to LocalClient.Create
	I0429 12:25:22.739093   17722 start.go:167] duration metric: took 14.211111126s to libmachine.API.Create "ha-431000"
	I0429 12:25:22.739099   17722 start.go:293] postStartSetup for "ha-431000" (driver="hyperkit")
	I0429 12:25:22.739106   17722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 12:25:22.739115   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:22.739266   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 12:25:22.739278   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:22.739375   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:22.739470   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:22.739572   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:22.739667   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:22.770970   17722 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 12:25:22.774135   17722 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 12:25:22.774148   17722 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18774-15855/.minikube/addons for local assets ...
	I0429 12:25:22.774248   17722 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18774-15855/.minikube/files for local assets ...
	I0429 12:25:22.774464   17722 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem -> 163012.pem in /etc/ssl/certs
	I0429 12:25:22.774471   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem -> /etc/ssl/certs/163012.pem
	I0429 12:25:22.774683   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 12:25:22.781634   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem --> /etc/ssl/certs/163012.pem (1708 bytes)
	I0429 12:25:22.801443   17722 start.go:296] duration metric: took 62.337848ms for postStartSetup
	I0429 12:25:22.801469   17722 main.go:141] libmachine: (ha-431000) Calling .GetConfigRaw
	I0429 12:25:22.802046   17722 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0429 12:25:22.802212   17722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json ...
	I0429 12:25:22.802533   17722 start.go:128] duration metric: took 14.328253816s to createHost
	I0429 12:25:22.802553   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:22.802653   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:22.802747   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:22.802827   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:22.802911   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:22.803023   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:25:22.803149   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.53 22 <nil> <nil>}
	I0429 12:25:22.803156   17722 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 12:25:22.853273   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714418723.058219257
	
	I0429 12:25:22.853285   17722 fix.go:216] guest clock: 1714418723.058219257
	I0429 12:25:22.853294   17722 fix.go:229] Guest: 2024-04-29 12:25:23.058219257 -0700 PDT Remote: 2024-04-29 12:25:22.80254 -0700 PDT m=+14.919884293 (delta=255.679257ms)
	I0429 12:25:22.853306   17722 fix.go:200] guest clock delta is within tolerance: 255.679257ms
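
The tolerance check above compares the guest's `date +%s.%N` output against the host's wall clock and accepts the machine if the skew is small. A rough sketch of that comparison, using the values from this log; the parsing details and the 2s tolerance are illustrative, not minikube's exact logic:

    package main

    import (
    	"fmt"
    	"strconv"
    	"strings"
    	"time"
    )

    // parseGuestClock turns `date +%s.%N` output (e.g. "1714418723.058219257")
    // into a time.Time. Assumes a full 9-digit nanosecond field.
    func parseGuestClock(s string) (time.Time, error) {
    	parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
    	sec, err := strconv.ParseInt(parts[0], 10, 64)
    	if err != nil {
    		return time.Time{}, err
    	}
    	var nsec int64
    	if len(parts) == 2 {
    		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
    			return time.Time{}, err
    		}
    	}
    	return time.Unix(sec, nsec), nil
    }

    func main() {
    	guest, err := parseGuestClock("1714418723.058219257")
    	if err != nil {
    		panic(err)
    	}
    	host := time.Date(2024, 4, 29, 19, 25, 22, 802540000, time.UTC) // host reference time from the log
    	delta := guest.Sub(host)
    	fmt.Printf("delta %v, within 2s tolerance: %v\n", delta, delta < 2*time.Second && delta > -2*time.Second)
    }
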
	I0429 12:25:22.853309   17722 start.go:83] releasing machines lock for "ha-431000", held for 14.379182495s
	I0429 12:25:22.853329   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:22.853469   17722 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0429 12:25:22.853555   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:22.853914   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:22.854029   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:22.854103   17722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 12:25:22.854130   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:22.854195   17722 ssh_runner.go:195] Run: cat /version.json
	I0429 12:25:22.854205   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:22.854229   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:22.854325   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:22.854339   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:22.854430   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:22.854457   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:22.854550   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:22.854559   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:22.854669   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:22.880227   17722 ssh_runner.go:195] Run: systemctl --version
	I0429 12:25:22.884904   17722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0429 12:25:23.083357   17722 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 12:25:23.083479   17722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 12:25:23.099007   17722 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 12:25:23.099018   17722 start.go:494] detecting cgroup driver to use...
	I0429 12:25:23.099124   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:25:23.114251   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 12:25:23.124072   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 12:25:23.133303   17722 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 12:25:23.133353   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 12:25:23.142358   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:25:23.151274   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 12:25:23.160219   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:25:23.169150   17722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 12:25:23.178221   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 12:25:23.187855   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 12:25:23.200187   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 12:25:23.215042   17722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 12:25:23.229984   17722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 12:25:23.239505   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:23.358187   17722 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0429 12:25:23.377452   17722 start.go:494] detecting cgroup driver to use...
	I0429 12:25:23.377536   17722 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 12:25:23.390115   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:25:23.402150   17722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 12:25:23.418123   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:25:23.430203   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:25:23.441283   17722 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 12:25:23.508378   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:25:23.519755   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:25:23.536893   17722 ssh_runner.go:195] Run: which cri-dockerd
	I0429 12:25:23.539939   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 12:25:23.548090   17722 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 12:25:23.561479   17722 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 12:25:23.659092   17722 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 12:25:23.770465   17722 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 12:25:23.770541   17722 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0429 12:25:23.785669   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:23.879272   17722 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 12:25:26.127907   17722 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.248642004s)
	I0429 12:25:26.127971   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0429 12:25:26.139541   17722 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0429 12:25:26.153363   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 12:25:26.163941   17722 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0429 12:25:26.260524   17722 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0429 12:25:26.359754   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:26.473092   17722 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0429 12:25:26.486634   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0429 12:25:26.497861   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:26.593975   17722 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0429 12:25:26.650473   17722 start.go:541] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0429 12:25:26.650551   17722 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0429 12:25:26.655442   17722 start.go:562] Will wait 60s for crictl version
	I0429 12:25:26.655497   17722 ssh_runner.go:195] Run: which crictl
	I0429 12:25:26.658642   17722 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0429 12:25:26.687124   17722 start.go:578] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  26.0.2
	RuntimeApiVersion:  v1
	I0429 12:25:26.687194   17722 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 12:25:26.703960   17722 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0429 12:25:26.743708   17722 out.go:204] * Preparing Kubernetes v1.30.0 on Docker 26.0.2 ...
	I0429 12:25:26.743769   17722 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0429 12:25:26.744198   17722 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0429 12:25:26.748889   17722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
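
The grep/cp pipeline above keeps the host.minikube.internal mapping idempotent: any existing line for that name is filtered out before the current mapping is appended and copied back into /etc/hosts with sudo. A minimal Go rendering of the same rewrite, purely as an illustration of what the one-liner does:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // upsertHostsEntry drops any line already ending in "\t<name>" and appends
    // a fresh "<ip>\t<name>" mapping, mirroring the bash one-liner in the log.
    func upsertHostsEntry(hosts, ip, name string) string {
    	var kept []string
    	for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
    		if strings.HasSuffix(line, "\t"+name) {
    			continue // stale entry, drop it
    		}
    		kept = append(kept, line)
    	}
    	kept = append(kept, ip+"\t"+name)
    	return strings.Join(kept, "\n") + "\n"
    }

    func main() {
    	fmt.Print(upsertHostsEntry("127.0.0.1\tlocalhost\n", "192.169.0.1", "host.minikube.internal"))
    }
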
	I0429 12:25:26.759848   17722 kubeadm.go:877] updating cluster {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Mou
ntType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0429 12:25:26.759910   17722 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 12:25:26.759963   17722 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 12:25:26.778002   17722 docker.go:685] Got preloaded images: 
	I0429 12:25:26.778015   17722 docker.go:691] registry.k8s.io/kube-apiserver:v1.30.0 wasn't preloaded
	I0429 12:25:26.778063   17722 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 12:25:26.792529   17722 ssh_runner.go:195] Run: which lz4
	I0429 12:25:26.795405   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0429 12:25:26.795520   17722 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0429 12:25:26.798617   17722 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0429 12:25:26.798635   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 --> /preloaded.tar.lz4 (359556852 bytes)
	I0429 12:25:27.765888   17722 docker.go:649] duration metric: took 970.420369ms to copy over tarball
	I0429 12:25:27.765960   17722 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0429 12:25:30.567839   17722 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.801895501s)
	I0429 12:25:30.567854   17722 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0429 12:25:30.595797   17722 ssh_runner.go:195] Run: sudo cat /var/lib/docker/image/overlay2/repositories.json
	I0429 12:25:30.604930   17722 ssh_runner.go:362] scp memory --> /var/lib/docker/image/overlay2/repositories.json (2630 bytes)
	I0429 12:25:30.618739   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:30.724313   17722 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 12:25:32.998664   17722 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.274358101s)
	I0429 12:25:32.998761   17722 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0429 12:25:33.010141   17722 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.30.0
	registry.k8s.io/kube-scheduler:v1.30.0
	registry.k8s.io/kube-controller-manager:v1.30.0
	registry.k8s.io/kube-proxy:v1.30.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	
	-- /stdout --
	I0429 12:25:33.010164   17722 cache_images.go:84] Images are preloaded, skipping loading
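
The preload flow above amounts to: if the expected control-plane image is missing from `docker images`, copy the cached lz4 tarball to /preloaded.tar.lz4, unpack it under /var, restart Docker, and re-check. A small sketch of the "was the preload applied?" check; the function name is illustrative, not minikube's:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // hasImage reports whether the output of
    // `docker images --format {{.Repository}}:{{.Tag}}` contains the image
    // that the preload is expected to provide.
    func hasImage(dockerImagesOutput, want string) bool {
    	for _, line := range strings.Split(dockerImagesOutput, "\n") {
    		if strings.TrimSpace(line) == want {
    			return true
    		}
    	}
    	return false
    }

    func main() {
    	out := "registry.k8s.io/kube-apiserver:v1.30.0\nregistry.k8s.io/etcd:3.5.12-0"
    	fmt.Println(hasImage(out, "registry.k8s.io/kube-apiserver:v1.30.0")) // true
    }
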
	I0429 12:25:33.010175   17722 kubeadm.go:928] updating node { 192.169.0.53 8443 v1.30.0 docker true true} ...
	I0429 12:25:33.010258   17722 kubeadm.go:940] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-431000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.53
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0429 12:25:33.010327   17722 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0429 12:25:33.028263   17722 cni.go:84] Creating CNI manager for ""
	I0429 12:25:33.028276   17722 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 12:25:33.028288   17722 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0429 12:25:33.028307   17722 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.53 APIServerPort:8443 KubernetesVersion:v1.30.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-431000 NodeName:ha-431000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.53"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.53 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/ma
nifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0429 12:25:33.028401   17722 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.53
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-431000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.53
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.53"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
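
The kubeadm.yaml above is rendered from the option set logged at kubeadm.go:181 and staged as /var/tmp/minikube/kubeadm.yaml.new. As a rough illustration (not minikube's actual template), the InitConfiguration stanza could be produced with text/template from the node parameters seen in this run:

    package main

    import (
    	"os"
    	"text/template"
    )

    type initCfg struct {
    	AdvertiseAddress string
    	BindPort         int
    	NodeName         string
    	CRISocket        string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      criSocket: {{.CRISocket}}
      name: "{{.NodeName}}"
    `

    func main() {
    	t := template.Must(template.New("kubeadm").Parse(tmpl))
    	_ = t.Execute(os.Stdout, initCfg{
    		AdvertiseAddress: "192.169.0.53",
    		BindPort:         8443,
    		NodeName:         "ha-431000",
    		CRISocket:        "unix:///var/run/cri-dockerd.sock",
    	})
    }
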
	
	I0429 12:25:33.028424   17722 kube-vip.go:115] generating kube-vip config ...
	I0429 12:25:33.028473   17722 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0429 12:25:33.041730   17722 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0429 12:25:33.041824   17722 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/super-admin.conf"
	    name: kubeconfig
	status: {}
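
The lb_enable/lb_port settings in the kube-vip manifest above follow from the probe logged at kube-vip.go:115-167: control-plane load balancing is only switched on when the IPVS kernel modules load on the guest. A minimal sketch of that gate, meant to run on the guest; error handling is simplified and the function name is illustrative:

    package main

    import (
    	"fmt"
    	"os/exec"
    )

    // ipvsAvailable mirrors the modprobe command from the log: if the IPVS
    // modules load, kube-vip's load-balancing mode can be enabled.
    func ipvsAvailable() bool {
    	cmd := exec.Command("sudo", "sh", "-c",
    		"modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack")
    	return cmd.Run() == nil
    }

    func main() {
    	fmt.Println("enable kube-vip lb_enable:", ipvsAvailable())
    }
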
	I0429 12:25:33.041883   17722 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.0
	I0429 12:25:33.049658   17722 binaries.go:44] Found k8s binaries, skipping transfer
	I0429 12:25:33.049723   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0429 12:25:33.057238   17722 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (308 bytes)
	I0429 12:25:33.070533   17722 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0429 12:25:33.084130   17722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2151 bytes)
	I0429 12:25:33.098615   17722 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1446 bytes)
	I0429 12:25:33.112003   17722 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0429 12:25:33.115112   17722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0429 12:25:33.124648   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:25:33.220871   17722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0429 12:25:33.235322   17722 certs.go:68] Setting up /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000 for IP: 192.169.0.53
	I0429 12:25:33.235337   17722 certs.go:194] generating shared ca certs ...
	I0429 12:25:33.235350   17722 certs.go:226] acquiring lock for ca certs: {Name:mk18b421bd78dac9f18e716e2a7f75e3dd2eab02 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.235540   17722 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.key
	I0429 12:25:33.235613   17722 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/proxy-client-ca.key
	I0429 12:25:33.235624   17722 certs.go:256] generating profile certs ...
	I0429 12:25:33.235671   17722 certs.go:363] generating signed profile cert for "minikube-user": /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.key
	I0429 12:25:33.235685   17722 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.crt with IP's: []
	I0429 12:25:33.339009   17722 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.crt ...
	I0429 12:25:33.339025   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.crt: {Name:mkb1110c9ecfb391fd480dbb43740124cada1f52 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.339372   17722 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.key ...
	I0429 12:25:33.339380   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.key: {Name:mkd77fb4d6c3d30f48c31a52650895ec059a5da2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.339601   17722 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key.da16b1a9
	I0429 12:25:33.339619   17722 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt.da16b1a9 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.53 192.169.0.254]
	I0429 12:25:33.423656   17722 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt.da16b1a9 ...
	I0429 12:25:33.423669   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt.da16b1a9: {Name:mk814e943097b224c38c531359e60f0e011c83a9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.454432   17722 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key.da16b1a9 ...
	I0429 12:25:33.454489   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key.da16b1a9: {Name:mkf400b8f0ffe83cfed5eaf2cbd6dc5c6a80d173 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.455578   17722 certs.go:381] copying /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt.da16b1a9 -> /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt
	I0429 12:25:33.455838   17722 certs.go:385] copying /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key.da16b1a9 -> /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key
	I0429 12:25:33.456077   17722 certs.go:363] generating signed profile cert for "aggregator": /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.key
	I0429 12:25:33.456094   17722 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.crt with IP's: []
	I0429 12:25:33.747139   17722 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.crt ...
	I0429 12:25:33.747160   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.crt: {Name:mk16d91ff8fca22baa28a0d26524b0eab36376d1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:33.747480   17722 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.key ...
	I0429 12:25:33.747491   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.key: {Name:mk54d72d806a4c855d8c1362244d32326e1573f0 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
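
The apiserver profile certificate above is issued with IP SANs covering the in-cluster service IP (10.96.0.1), loopback, the node IP, and the HA VIP. A compact crypto/x509 sketch of issuing a serving certificate with those SANs; it self-signs for brevity, whereas minikube signs these with its minikubeCA:

    package main

    import (
    	"crypto/rand"
    	"crypto/rsa"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	key, err := rsa.GenerateKey(rand.Reader, 2048)
    	if err != nil {
    		panic(err)
    	}
    	tpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{CommonName: "minikube"},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(24 * time.Hour),
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// IP SANs taken from the log above
    		IPAddresses: []net.IP{
    			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"),
    			net.ParseIP("192.169.0.53"), net.ParseIP("192.169.0.254"),
    		},
    	}
    	der, err := x509.CreateCertificate(rand.Reader, tpl, tpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }
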
	I0429 12:25:33.747702   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0429 12:25:33.747734   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0429 12:25:33.747753   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0429 12:25:33.747772   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0429 12:25:33.747790   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0429 12:25:33.747808   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0429 12:25:33.747826   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0429 12:25:33.747844   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0429 12:25:33.747942   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/16301.pem (1338 bytes)
	W0429 12:25:33.747991   17722 certs.go:480] ignoring /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/16301_empty.pem, impossibly tiny 0 bytes
	I0429 12:25:33.747999   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca-key.pem (1675 bytes)
	I0429 12:25:33.748029   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem (1082 bytes)
	I0429 12:25:33.748058   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem (1123 bytes)
	I0429 12:25:33.748086   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem (1675 bytes)
	I0429 12:25:33.748159   17722 certs.go:484] found cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem (1708 bytes)
	I0429 12:25:33.748192   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:25:33.748211   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/16301.pem -> /usr/share/ca-certificates/16301.pem
	I0429 12:25:33.748229   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem -> /usr/share/ca-certificates/163012.pem
	I0429 12:25:33.748622   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0429 12:25:33.769199   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0429 12:25:33.789288   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0429 12:25:33.809631   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0429 12:25:33.829028   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0429 12:25:33.848280   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0429 12:25:33.867787   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0429 12:25:33.886985   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0429 12:25:33.908803   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0429 12:25:33.936146   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/16301.pem --> /usr/share/ca-certificates/16301.pem (1338 bytes)
	I0429 12:25:33.955589   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem --> /usr/share/ca-certificates/163012.pem (1708 bytes)
	I0429 12:25:33.976759   17722 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0429 12:25:33.990981   17722 ssh_runner.go:195] Run: openssl version
	I0429 12:25:33.995277   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/163012.pem && ln -fs /usr/share/ca-certificates/163012.pem /etc/ssl/certs/163012.pem"
	I0429 12:25:34.003661   17722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/163012.pem
	I0429 12:25:34.007097   17722 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Apr 29 19:19 /usr/share/ca-certificates/163012.pem
	I0429 12:25:34.007137   17722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/163012.pem
	I0429 12:25:34.011417   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/163012.pem /etc/ssl/certs/3ec20f2e.0"
	I0429 12:25:34.019724   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0429 12:25:34.027921   17722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:25:34.031378   17722 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Apr 29 19:11 /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:25:34.031414   17722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0429 12:25:34.035640   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0429 12:25:34.044197   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16301.pem && ln -fs /usr/share/ca-certificates/16301.pem /etc/ssl/certs/16301.pem"
	I0429 12:25:34.052743   17722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16301.pem
	I0429 12:25:34.056299   17722 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Apr 29 19:19 /usr/share/ca-certificates/16301.pem
	I0429 12:25:34.056347   17722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16301.pem
	I0429 12:25:34.060683   17722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/16301.pem /etc/ssl/certs/51391683.0"
	I0429 12:25:34.069054   17722 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0429 12:25:34.072258   17722 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0429 12:25:34.072301   17722 kubeadm.go:391] StartCluster: {Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 C
lusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountT
ype:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:25:34.072393   17722 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0429 12:25:34.083740   17722 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0429 12:25:34.091194   17722 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0429 12:25:34.098727   17722 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0429 12:25:34.106172   17722 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0429 12:25:34.106181   17722 kubeadm.go:156] found existing configuration files:
	
	I0429 12:25:34.106220   17722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0429 12:25:34.113138   17722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0429 12:25:34.113180   17722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0429 12:25:34.120473   17722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0429 12:25:34.127438   17722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0429 12:25:34.127477   17722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0429 12:25:34.134756   17722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0429 12:25:34.141743   17722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0429 12:25:34.141780   17722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0429 12:25:34.149046   17722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0429 12:25:34.156264   17722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0429 12:25:34.156318   17722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0429 12:25:34.163835   17722 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.30.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0429 12:25:34.237910   17722 kubeadm.go:309] [init] Using Kubernetes version: v1.30.0
	I0429 12:25:34.237955   17722 kubeadm.go:309] [preflight] Running pre-flight checks
	I0429 12:25:34.323023   17722 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0429 12:25:34.323108   17722 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0429 12:25:34.323194   17722 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0429 12:25:34.504766   17722 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0429 12:25:34.536903   17722 out.go:204]   - Generating certificates and keys ...
	I0429 12:25:34.536959   17722 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0429 12:25:34.537016   17722 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0429 12:25:34.879309   17722 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0429 12:25:34.943072   17722 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0429 12:25:35.013417   17722 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0429 12:25:35.182918   17722 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0429 12:25:35.261483   17722 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0429 12:25:35.261590   17722 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.53 127.0.0.1 ::1]
	I0429 12:25:35.419895   17722 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0429 12:25:35.420010   17722 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-431000 localhost] and IPs [192.169.0.53 127.0.0.1 ::1]
	I0429 12:25:35.479554   17722 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0429 12:25:35.607018   17722 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0429 12:25:35.903671   17722 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0429 12:25:35.904031   17722 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0429 12:25:36.142024   17722 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0429 12:25:36.288455   17722 kubeadm.go:309] [kubeconfig] Writing "super-admin.conf" kubeconfig file
	I0429 12:25:36.556621   17722 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0429 12:25:36.634522   17722 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0429 12:25:36.899304   17722 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0429 12:25:36.899717   17722 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0429 12:25:36.902090   17722 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0429 12:25:36.923478   17722 out.go:204]   - Booting up control plane ...
	I0429 12:25:36.923546   17722 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0429 12:25:36.923600   17722 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0429 12:25:36.923664   17722 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0429 12:25:36.923740   17722 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0429 12:25:36.923808   17722 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0429 12:25:36.923851   17722 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0429 12:25:37.025954   17722 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests"
	I0429 12:25:37.026032   17722 kubeadm.go:309] [kubelet-check] Waiting for a healthy kubelet. This can take up to 4m0s
	I0429 12:25:38.035770   17722 kubeadm.go:309] [kubelet-check] The kubelet is healthy after 1.010128476s
	I0429 12:25:38.035905   17722 kubeadm.go:309] [api-check] Waiting for a healthy API server. This can take up to 4m0s
	I0429 12:25:43.764081   17722 kubeadm.go:309] [api-check] The API server is healthy after 5.731795501s
	I0429 12:25:43.772560   17722 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0429 12:25:43.789869   17722 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0429 12:25:43.805688   17722 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0429 12:25:43.805836   17722 kubeadm.go:309] [mark-control-plane] Marking the node ha-431000 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0429 12:25:43.811946   17722 kubeadm.go:309] [bootstrap-token] Using token: m6uvir.d4h53m26b1o2593q
	I0429 12:25:43.849413   17722 out.go:204]   - Configuring RBAC rules ...
	I0429 12:25:43.849574   17722 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0429 12:25:43.852729   17722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0429 12:25:43.892587   17722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0429 12:25:43.894879   17722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0429 12:25:43.896753   17722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0429 12:25:43.898705   17722 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0429 12:25:44.168425   17722 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0429 12:25:44.581231   17722 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0429 12:25:45.167548   17722 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0429 12:25:45.168144   17722 kubeadm.go:309] 
	I0429 12:25:45.168196   17722 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0429 12:25:45.168206   17722 kubeadm.go:309] 
	I0429 12:25:45.168263   17722 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0429 12:25:45.168268   17722 kubeadm.go:309] 
	I0429 12:25:45.168291   17722 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0429 12:25:45.168346   17722 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0429 12:25:45.168395   17722 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0429 12:25:45.168404   17722 kubeadm.go:309] 
	I0429 12:25:45.168448   17722 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0429 12:25:45.168454   17722 kubeadm.go:309] 
	I0429 12:25:45.168486   17722 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0429 12:25:45.168492   17722 kubeadm.go:309] 
	I0429 12:25:45.168531   17722 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0429 12:25:45.168592   17722 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0429 12:25:45.168639   17722 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0429 12:25:45.168643   17722 kubeadm.go:309] 
	I0429 12:25:45.168723   17722 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0429 12:25:45.168792   17722 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0429 12:25:45.168798   17722 kubeadm.go:309] 
	I0429 12:25:45.168870   17722 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token m6uvir.d4h53m26b1o2593q \
	I0429 12:25:45.168957   17722 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:536cc8a1d30a1fe87d4b4651015a78bced2496ff2cc6e72fc88d924b8d819ff9 \
	I0429 12:25:45.168980   17722 kubeadm.go:309] 	--control-plane 
	I0429 12:25:45.168984   17722 kubeadm.go:309] 
	I0429 12:25:45.169055   17722 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0429 12:25:45.169064   17722 kubeadm.go:309] 
	I0429 12:25:45.169140   17722 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token m6uvir.d4h53m26b1o2593q \
	I0429 12:25:45.169225   17722 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:536cc8a1d30a1fe87d4b4651015a78bced2496ff2cc6e72fc88d924b8d819ff9 
	I0429 12:25:45.169327   17722 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
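
The --discovery-token-ca-cert-hash value in the join commands above is the SHA-256 of the cluster CA certificate's Subject Public Key Info, hex-encoded and prefixed with "sha256:". A sketch of deriving it; the PEM literal is only a placeholder, and on the node the CA certificate lives at /var/lib/minikube/certs/ca.crt:

    package main

    import (
    	"crypto/sha256"
    	"crypto/x509"
    	"encoding/hex"
    	"encoding/pem"
    	"fmt"
    )

    // caCertHash computes the kubeadm-style CA pin: sha256 over the
    // certificate's RawSubjectPublicKeyInfo, hex-encoded.
    func caCertHash(caPEM []byte) (string, error) {
    	block, _ := pem.Decode(caPEM)
    	if block == nil {
    		return "", fmt.Errorf("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return "", err
    	}
    	sum := sha256.Sum256(cert.RawSubjectPublicKeyInfo)
    	return "sha256:" + hex.EncodeToString(sum[:]), nil
    }

    func main() {
    	// Placeholder input; read the real CA certificate in practice.
    	hash, err := caCertHash([]byte("-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----\n"))
    	if err != nil {
    		fmt.Println("error:", err)
    		return
    	}
    	fmt.Println(hash)
    }
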
	I0429 12:25:45.169339   17722 cni.go:84] Creating CNI manager for ""
	I0429 12:25:45.169343   17722 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0429 12:25:45.192418   17722 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0429 12:25:45.250402   17722 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0429 12:25:45.254278   17722 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.30.0/kubectl ...
	I0429 12:25:45.254288   17722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0429 12:25:45.273385   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0429 12:25:45.493620   17722 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0429 12:25:45.493687   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:45.493694   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-431000 minikube.k8s.io/updated_at=2024_04_29T12_25_45_0700 minikube.k8s.io/version=v1.33.0 minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e minikube.k8s.io/name=ha-431000 minikube.k8s.io/primary=true
	I0429 12:25:45.612967   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:45.644835   17722 ops.go:34] apiserver oom_adj: -16
	I0429 12:25:46.113178   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:46.613113   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:47.114140   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:47.613848   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:48.113291   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:48.613593   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:49.113154   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:49.613813   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:50.113663   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:50.613520   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:51.113510   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:51.613048   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:52.113848   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:52.614240   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:53.113945   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:53.615040   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:54.113136   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:54.614312   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:55.113432   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:55.614732   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:56.113281   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:56.613014   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:57.114177   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:57.613453   17722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.30.0/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0429 12:25:57.726340   17722 kubeadm.go:1107] duration metric: took 12.232843816s to wait for elevateKubeSystemPrivileges
	W0429 12:25:57.726368   17722 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0429 12:25:57.726373   17722 kubeadm.go:393] duration metric: took 23.654349307s to StartCluster
	I0429 12:25:57.726388   17722 settings.go:142] acquiring lock: {Name:mk4619a2e41059be70c5979af83844e654a88068 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:57.726480   17722 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/18774-15855/kubeconfig
	I0429 12:25:57.726968   17722 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/kubeconfig: {Name:mkea524d41d79b030c40cdcb50fe58e264b35b6f Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:25:57.727247   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0429 12:25:57.727263   17722 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 12:25:57.727278   17722 start.go:240] waiting for startup goroutines ...
	I0429 12:25:57.727289   17722 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0429 12:25:57.727325   17722 addons.go:69] Setting storage-provisioner=true in profile "ha-431000"
	I0429 12:25:57.727331   17722 addons.go:69] Setting default-storageclass=true in profile "ha-431000"
	I0429 12:25:57.727351   17722 addons.go:234] Setting addon storage-provisioner=true in "ha-431000"
	I0429 12:25:57.727356   17722 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-431000"
	I0429 12:25:57.727369   17722 host.go:66] Checking if "ha-431000" exists ...
	I0429 12:25:57.727424   17722 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:25:57.727601   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:57.727602   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:57.727616   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:57.727617   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:57.736886   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61984
	I0429 12:25:57.737295   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:57.737678   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:57.737690   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:57.737981   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61986
	I0429 12:25:57.738074   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:57.738331   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:57.738533   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:57.738579   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:57.738658   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:57.738671   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:57.739264   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:57.739478   17722 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0429 12:25:57.739646   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:57.739702   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:57.741554   17722 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/18774-15855/kubeconfig
	I0429 12:25:57.741775   17722 kapi.go:59] client config for ha-431000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/client.key", CAFile:"/Users/jenkins/minikube-integration/18774-15855/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xdd3cee0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0429 12:25:57.742264   17722 cert_rotation.go:137] Starting client certificate rotation controller
	I0429 12:25:57.742421   17722 addons.go:234] Setting addon default-storageclass=true in "ha-431000"
	I0429 12:25:57.742443   17722 host.go:66] Checking if "ha-431000" exists ...
	I0429 12:25:57.742660   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:57.742677   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:57.747696   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61988
	I0429 12:25:57.748147   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:57.748482   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:57.748498   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:57.748745   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:57.748874   17722 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0429 12:25:57.748975   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:57.749041   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:57.750035   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:57.788044   17722 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0429 12:25:57.751748   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61990
	I0429 12:25:57.825563   17722 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 12:25:57.825579   17722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0429 12:25:57.825595   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:57.825776   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:57.825980   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:57.826026   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:57.826183   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:57.826334   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:57.826500   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:57.826516   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:57.826839   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:57.827237   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:57.827262   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:57.835538   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.169.0.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.30.0/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0429 12:25:57.836072   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61993
	I0429 12:25:57.836372   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:57.836712   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:57.836731   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:57.836941   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:57.837051   17722 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0429 12:25:57.837149   17722 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:57.837235   17722 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:25:57.838190   17722 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:25:57.838373   17722 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0429 12:25:57.838382   17722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0429 12:25:57.838392   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:25:57.838527   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:25:57.838615   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:25:57.838701   17722 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:25:57.838792   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	I0429 12:25:57.921992   17722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0429 12:25:57.963950   17722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.30.0/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0429 12:25:58.239892   17722 start.go:946] {"host.minikube.internal": 192.169.0.1} host record injected into CoreDNS's ConfigMap
	I0429 12:25:58.350022   17722 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:58.350036   17722 main.go:141] libmachine: (ha-431000) Calling .Close
	I0429 12:25:58.350091   17722 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:58.350105   17722 main.go:141] libmachine: (ha-431000) Calling .Close
	I0429 12:25:58.350223   17722 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0429 12:25:58.350224   17722 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:58.350239   17722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:58.350250   17722 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:58.350258   17722 main.go:141] libmachine: (ha-431000) Calling .Close
	I0429 12:25:58.350278   17722 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:58.350288   17722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:58.350302   17722 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:58.350302   17722 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0429 12:25:58.350309   17722 main.go:141] libmachine: (ha-431000) Calling .Close
	I0429 12:25:58.350428   17722 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:58.350439   17722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:58.350445   17722 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0429 12:25:58.350487   17722 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:58.350497   17722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:58.350532   17722 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0429 12:25:58.350553   17722 round_trippers.go:463] GET https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0429 12:25:58.350558   17722 round_trippers.go:469] Request Headers:
	I0429 12:25:58.350566   17722 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:25:58.350571   17722 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0429 12:25:58.356148   17722 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0429 12:25:58.356671   17722 round_trippers.go:463] PUT https://192.169.0.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0429 12:25:58.356684   17722 round_trippers.go:469] Request Headers:
	I0429 12:25:58.356690   17722 round_trippers.go:473]     Accept: application/json, */*
	I0429 12:25:58.356695   17722 round_trippers.go:473]     Content-Type: application/json
	I0429 12:25:58.356699   17722 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0429 12:25:58.359632   17722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0429 12:25:58.359767   17722 main.go:141] libmachine: Making call to close driver server
	I0429 12:25:58.359777   17722 main.go:141] libmachine: (ha-431000) Calling .Close
	I0429 12:25:58.359915   17722 main.go:141] libmachine: (ha-431000) DBG | Closing plugin on server side
	I0429 12:25:58.359935   17722 main.go:141] libmachine: Successfully made call to close driver server
	I0429 12:25:58.359942   17722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0429 12:25:58.383304   17722 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0429 12:25:58.425404   17722 addons.go:505] duration metric: took 698.121896ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0429 12:25:58.425460   17722 start.go:245] waiting for cluster config update ...
	I0429 12:25:58.425487   17722 start.go:254] writing updated cluster config ...
	I0429 12:25:58.541992   17722 out.go:177] 
	I0429 12:25:58.562923   17722 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:25:58.563008   17722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json ...
	I0429 12:25:58.583885   17722 out.go:177] * Starting "ha-431000-m02" control-plane node in "ha-431000" cluster
	I0429 12:25:58.624732   17722 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 12:25:58.624767   17722 cache.go:56] Caching tarball of preloaded images
	I0429 12:25:58.624960   17722 preload.go:173] Found /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0429 12:25:58.624976   17722 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 12:25:58.625077   17722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json ...
	I0429 12:25:58.625802   17722 start.go:360] acquireMachinesLock for ha-431000-m02: {Name:mkd2435de3b02389c894a9c5afc7bb14c11265dc Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0429 12:25:58.625882   17722 start.go:364] duration metric: took 63.962µs to acquireMachinesLock for "ha-431000-m02"
	I0429 12:25:58.625908   17722 start.go:93] Provisioning new machine with config: &{Name:ha-431000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:ha-431000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.53 Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0429 12:25:58.625983   17722 start.go:125] createHost starting for "m02" (driver="hyperkit")
	I0429 12:25:58.647279   17722 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0429 12:25:58.647410   17722 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:25:58.647441   17722 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:25:58.657607   17722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61997
	I0429 12:25:58.657937   17722 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:25:58.658289   17722 main.go:141] libmachine: Using API Version  1
	I0429 12:25:58.658303   17722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:25:58.658522   17722 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:25:58.658625   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0429 12:25:58.658714   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:25:58.658812   17722 start.go:159] libmachine.API.Create for "ha-431000" (driver="hyperkit")
	I0429 12:25:58.658829   17722 client.go:168] LocalClient.Create starting
	I0429 12:25:58.658858   17722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem
	I0429 12:25:58.658895   17722 main.go:141] libmachine: Decoding PEM data...
	I0429 12:25:58.658904   17722 main.go:141] libmachine: Parsing certificate...
	I0429 12:25:58.658941   17722 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem
	I0429 12:25:58.658966   17722 main.go:141] libmachine: Decoding PEM data...
	I0429 12:25:58.658976   17722 main.go:141] libmachine: Parsing certificate...
	I0429 12:25:58.658987   17722 main.go:141] libmachine: Running pre-create checks...
	I0429 12:25:58.658993   17722 main.go:141] libmachine: (ha-431000-m02) Calling .PreCreateCheck
	I0429 12:25:58.659060   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:58.659082   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0429 12:25:58.684517   17722 main.go:141] libmachine: Creating machine...
	I0429 12:25:58.684548   17722 main.go:141] libmachine: (ha-431000-m02) Calling .Create
	I0429 12:25:58.684719   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:58.685030   17722 main.go:141] libmachine: (ha-431000-m02) DBG | I0429 12:25:58.684708   17751 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/18774-15855/.minikube
	I0429 12:25:58.685090   17722 main.go:141] libmachine: (ha-431000-m02) Downloading /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/18774-15855/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso...
	I0429 12:25:58.880461   17722 main.go:141] libmachine: (ha-431000-m02) DBG | I0429 12:25:58.880334   17751 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa...
	I0429 12:25:58.972148   17722 main.go:141] libmachine: (ha-431000-m02) DBG | I0429 12:25:58.972072   17751 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk...
	I0429 12:25:58.972171   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Writing magic tar header
	I0429 12:25:58.972194   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Writing SSH key tar header
	I0429 12:25:58.972426   17722 main.go:141] libmachine: (ha-431000-m02) DBG | I0429 12:25:58.972393   17751 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02 ...
	I0429 12:25:59.330149   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:59.330167   17722 main.go:141] libmachine: (ha-431000-m02) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/hyperkit.pid
	I0429 12:25:59.330184   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Using UUID 9365fc26-c2e1-4d51-b9c4-d17102d62f4c
	I0429 12:25:59.355313   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Generated MAC ca:19:33:23:af:66
	I0429 12:25:59.355328   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000
	I0429 12:25:59.355356   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9365fc26-c2e1-4d51-b9c4-d17102d62f4c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b21b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0429 12:25:59.355382   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"9365fc26-c2e1-4d51-b9c4-d17102d62f4c", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0000b21b0)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0429 12:25:59.355419   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "9365fc26-c2e1-4d51-b9c4-d17102d62f4c", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"}
	I0429 12:25:59.355452   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 9365fc26-c2e1-4d51-b9c4-d17102d62f4c -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/ha-431000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/tty,log=/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/bzimage,/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-431000"
	I0429 12:25:59.355465   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0429 12:25:59.358434   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 DEBUG: hyperkit: Pid is 17752
	I0429 12:25:59.359551   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 0
	I0429 12:25:59.359562   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:25:59.359638   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:25:59.360574   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:25:59.360681   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:25:59.360697   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:25:59.360720   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:25:59.360739   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:25:59.360752   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:25:59.360762   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:25:59.360776   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:25:59.360790   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:25:59.360800   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:25:59.360809   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:25:59.360829   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:25:59.360841   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:25:59.360856   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:25:59.360868   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:25:59.360879   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:25:59.360894   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:25:59.360903   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:25:59.360932   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:25:59.360940   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:25:59.360949   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:25:59.360958   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:25:59.360966   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:25:59.360973   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:25:59.360981   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:25:59.360987   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:25:59.360995   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:25:59.361001   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:25:59.361007   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:25:59.361029   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:25:59.361043   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:25:59.361055   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:25:59.361062   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:25:59.361069   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:25:59.361077   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:25:59.361085   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:25:59.361094   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:25:59.361102   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:25:59.361111   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:25:59.361122   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:25:59.361132   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:25:59.361139   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:25:59.361146   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:25:59.361155   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:25:59.361168   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:25:59.361180   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:25:59.361200   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:25:59.361215   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:25:59.361230   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:25:59.361243   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:25:59.361254   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:25:59.361267   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:25:59.361281   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:25:59.361294   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:25:59.365988   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0429 12:25:59.374154   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0429 12:25:59.374983   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0429 12:25:59.375007   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0429 12:25:59.375049   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0429 12:25:59.375071   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0429 12:25:59.757805   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0429 12:25:59.757822   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0429 12:25:59.872427   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0429 12:25:59.872442   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0429 12:25:59.872451   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0429 12:25:59.872459   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0429 12:25:59.873348   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0429 12:25:59.873361   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:25:59 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0429 12:26:01.362461   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 1
	I0429 12:26:01.362478   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:01.362589   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:01.363375   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:26:01.363463   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:26:01.363472   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:26:01.363482   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:26:01.363495   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:26:01.363505   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:26:01.363523   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:26:01.363540   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:26:01.363555   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:26:01.363564   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:26:01.363570   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:26:01.363580   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:26:01.363585   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:26:01.363594   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:26:01.363601   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:26:01.363606   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:26:01.363615   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:26:01.363620   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:26:01.363627   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:26:01.363641   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:26:01.363649   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:26:01.363656   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:26:01.363664   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:26:01.363671   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:26:01.363678   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:26:01.363685   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:26:01.363691   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:26:01.363698   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:26:01.363706   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:26:01.363715   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:26:01.363729   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:26:01.363745   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:26:01.363752   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:26:01.363758   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:26:01.363763   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:26:01.363770   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:26:01.363779   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:26:01.363786   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:26:01.363791   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:26:01.363806   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:26:01.363819   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:26:01.363827   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:26:01.363833   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:26:01.363839   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:26:01.363847   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:26:01.363854   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:26:01.363862   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:26:01.363876   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:26:01.363887   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:26:01.363897   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:26:01.363909   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:26:01.363929   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:26:01.363942   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:26:01.363951   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:26:03.364674   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 2
	I0429 12:26:03.364689   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:03.364761   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:03.365538   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:26:03.365618   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:26:03.365628   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:26:03.365649   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:26:03.365656   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:26:03.365663   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:26:03.365668   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:26:03.365681   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:26:03.365689   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:26:03.365701   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:26:03.365714   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:26:03.365721   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:26:03.365734   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:26:03.365745   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:26:03.365754   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:26:03.365760   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:26:03.365766   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:26:03.365776   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:26:03.365783   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:26:03.365789   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:26:03.365795   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:26:03.365802   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:26:03.365809   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:26:03.365820   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:26:03.365828   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:26:03.365835   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:26:03.365840   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:26:03.365850   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:26:03.365857   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:26:03.365871   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:26:03.365880   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:26:03.365887   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:26:03.365894   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:26:03.365906   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:26:03.365922   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:26:03.365930   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:26:03.365939   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:26:03.365955   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:26:03.365964   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:26:03.365971   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:26:03.365979   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:26:03.365986   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:26:03.365993   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:26:03.366000   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:26:03.366008   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:26:03.366015   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:26:03.366022   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:26:03.366028   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:26:03.366035   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:26:03.366043   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:26:03.366050   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:26:03.366067   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:26:03.366079   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:26:03.366088   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:26:05.177799   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:26:05 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 0
	I0429 12:26:05.177845   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:26:05 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 0
	I0429 12:26:05.177853   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:26:05 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 0
	I0429 12:26:05.201640   17722 main.go:141] libmachine: (ha-431000-m02) DBG | 2024/04/29 12:26:05 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 0
	I0429 12:26:05.367080   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 3
	I0429 12:26:05.367108   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:05.367266   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:05.368694   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:26:05.368836   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:26:05.368855   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:26:05.368872   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:26:05.368902   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:26:05.368917   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:26:05.368932   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:26:05.368944   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:26:05.368959   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:26:05.368998   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:26:05.369028   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:26:05.369054   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:26:05.369067   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:26:05.369081   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:26:05.369089   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:26:05.369107   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:26:05.369129   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:26:05.369139   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:26:05.369150   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:26:05.369170   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:26:05.369187   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:26:05.369198   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:26:05.369209   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:26:05.369224   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:26:05.369237   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:26:05.369248   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:26:05.369260   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:26:05.369269   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:26:05.369280   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:26:05.369290   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:26:05.369300   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:26:05.369309   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:26:05.369317   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:26:05.369342   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:26:05.369360   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:26:05.369371   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:26:05.369383   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:26:05.369391   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:26:05.369402   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:26:05.369412   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:26:05.369420   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:26:05.369429   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:26:05.369437   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:26:05.369446   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:26:05.369456   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:26:05.369467   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:26:05.369476   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:26:05.369491   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:26:05.369505   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:26:05.369516   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:26:05.369528   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:26:05.369539   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:26:05.369549   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:26:05.369556   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:26:07.369177   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 4
	I0429 12:26:07.369192   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:07.369292   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:07.370069   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:26:07.370166   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 52 entries in /var/db/dhcpd_leases!
	I0429 12:26:07.370180   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.53 HWAddress:12:79:5a:6e:b2:5a ID:1,12:79:5a:6e:b2:5a Lease:0x6631459e}
	I0429 12:26:07.370193   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.52 HWAddress:66:67:b3:3f:36:4 ID:1,66:67:b3:3f:36:4 Lease:0x6631443b}
	I0429 12:26:07.370204   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.51 HWAddress:9e:6f:85:8b:5c:e ID:1,9e:6f:85:8b:5c:e Lease:0x66314374}
	I0429 12:26:07.370215   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.50 HWAddress:ba:6:be:c9:f8:5 ID:1,ba:6:be:c9:f8:5 Lease:0x6631426a}
	I0429 12:26:07.370228   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.49 HWAddress:62:69:53:16:30:4f ID:1,62:69:53:16:30:4f Lease:0x662ff093}
	I0429 12:26:07.370247   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.48 HWAddress:46:11:4c:e6:57:93 ID:1,46:11:4c:e6:57:93 Lease:0x6631407f}
	I0429 12:26:07.370259   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.47 HWAddress:5a:bc:af:3e:b0:55 ID:1,5a:bc:af:3e:b0:55 Lease:0x66314010}
	I0429 12:26:07.370274   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.46 HWAddress:b6:f:e9:4d:8:7e ID:1,b6:f:e9:4d:8:7e Lease:0x66313e68}
	I0429 12:26:07.370287   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.45 HWAddress:ee:36:7b:95:97:94 ID:1,ee:36:7b:95:97:94 Lease:0x66313e91}
	I0429 12:26:07.370296   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.44 HWAddress:8a:a2:2c:8c:2d:6a ID:1,8a:a2:2c:8c:2d:6a Lease:0x66313d81}
	I0429 12:26:07.370304   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.43 HWAddress:2e:d2:44:53:7e:a7 ID:1,2e:d2:44:53:7e:a7 Lease:0x66313d32}
	I0429 12:26:07.370309   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.42 HWAddress:be:5f:4:d4:f:7c ID:1,be:5f:4:d4:f:7c Lease:0x66313d22}
	I0429 12:26:07.370315   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.41 HWAddress:e:3d:cb:34:20:95 ID:1,e:3d:cb:34:20:95 Lease:0x66313ccf}
	I0429 12:26:07.370344   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.40 HWAddress:56:a0:46:b5:59:86 ID:1,56:a0:46:b5:59:86 Lease:0x66313c55}
	I0429 12:26:07.370354   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.39 HWAddress:e:a6:cf:ff:22:2c ID:1,e:a6:cf:ff:22:2c Lease:0x66313c31}
	I0429 12:26:07.370363   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:32:1a:1b:fd:30:cd ID:1,32:1a:1b:fd:30:cd Lease:0x66313bf3}
	I0429 12:26:07.370371   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:3e:24:de:54:ce:90 ID:1,3e:24:de:54:ce:90 Lease:0x66313bd0}
	I0429 12:26:07.370384   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:ea:e:a8:b8:a4:a6 ID:1,ea:e:a8:b8:a4:a6 Lease:0x66313b9e}
	I0429 12:26:07.370397   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:1e:b8:3b:7d:c3:ee ID:1,1e:b8:3b:7d:c3:ee Lease:0x66313b4a}
	I0429 12:26:07.370403   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:ce:ae:ff:53:4b:19 ID:1,ce:ae:ff:53:4b:19 Lease:0x662fe9a6}
	I0429 12:26:07.370411   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:2:3d:3b:5c:45:17 ID:1,2:3d:3b:5c:45:17 Lease:0x66313af3}
	I0429 12:26:07.370418   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:ae:fd:c6:ce:23:b3 ID:1,ae:fd:c6:ce:23:b3 Lease:0x66313ac7}
	I0429 12:26:07.370423   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:a:a2:da:8c:5:3d ID:1,a:a2:da:8c:5:3d Lease:0x66313ab8}
	I0429 12:26:07.370438   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:aa:5b:d1:c2:54:8d ID:1,aa:5b:d1:c2:54:8d Lease:0x66313a47}
	I0429 12:26:07.370446   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:be:60:3c:c4:35:eb ID:1,be:60:3c:c4:35:eb Lease:0x66313a09}
	I0429 12:26:07.370452   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:8a:6e:e1:22:c3:5e ID:1,8a:6e:e1:22:c3:5e Lease:0x66313981}
	I0429 12:26:07.370467   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:86:df:0:ec:42:6a ID:1,86:df:0:ec:42:6a Lease:0x66313967}
	I0429 12:26:07.370478   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ca:b1:a:10:e9:41 ID:1,ca:b1:a:10:e9:41 Lease:0x662fe7f6}
	I0429 12:26:07.370508   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:aa:c:bd:e2:7c:f8 ID:1,aa:c:bd:e2:7c:f8 Lease:0x66313935}
	I0429 12:26:07.370534   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:2:af:90:67:33:63 ID:1,2:af:90:67:33:63 Lease:0x6631391d}
	I0429 12:26:07.370550   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:ee:96:63:e6:b9:b1 ID:1,ee:96:63:e6:b9:b1 Lease:0x663138af}
	I0429 12:26:07.370567   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:1a:43:f9:1f:61:87 ID:1,1a:43:f9:1f:61:87 Lease:0x6631383f}
	I0429 12:26:07.370582   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:1e:d2:9c:70:d7:da ID:1,1e:d2:9c:70:d7:da Lease:0x66313811}
	I0429 12:26:07.370595   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:ce:f6:8f:df:6c:eb ID:1,ce:f6:8f:df:6c:eb Lease:0x6631377c}
	I0429 12:26:07.370607   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:2:87:c:2e:da:57 ID:1,2:87:c:2e:da:57 Lease:0x662fe57f}
	I0429 12:26:07.370629   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:1e:79:76:ce:d9:90 ID:1,1e:79:76:ce:d9:90 Lease:0x66313744}
	I0429 12:26:07.370639   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:be:75:40:f4:dd:f9 ID:1,be:75:40:f4:dd:f9 Lease:0x6631371b}
	I0429 12:26:07.370646   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:1e:1:99:c9:9c:88 ID:1,1e:1:99:c9:9c:88 Lease:0x662fe326}
	I0429 12:26:07.370659   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:a2:ef:f:61:56:23 ID:1,a2:ef:f:61:56:23 Lease:0x662fe30e}
	I0429 12:26:07.370669   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:a:8b:d2:79:14:82 ID:1,a:8b:d2:79:14:82 Lease:0x6631343c}
	I0429 12:26:07.370678   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:66:48:1a:b6:94:66 ID:1,66:48:1a:b6:94:66 Lease:0x66313417}
	I0429 12:26:07.370685   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:e6:e1:c7:25:13:50 ID:1,e6:e1:c7:25:13:50 Lease:0x6631335f}
	I0429 12:26:07.370695   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:a:4a:cf:ab:1b:cf ID:1,a:4a:cf:ab:1b:cf Lease:0x66313330}
	I0429 12:26:07.370729   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:ee:12:53:a8:64:2d ID:1,ee:12:53:a8:64:2d Lease:0x662fe1a4}
	I0429 12:26:07.370740   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:66:64:a7:c6:e7:be ID:1,66:64:a7:c6:e7:be Lease:0x66312bce}
	I0429 12:26:07.370745   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:36:65:eb:ad:cd:b4 ID:1,36:65:eb:ad:cd:b4 Lease:0x662fe19d}
	I0429 12:26:07.370768   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:36:80:d8:7d:70:bc ID:1,36:80:d8:7d:70:bc Lease:0x66312b4e}
	I0429 12:26:07.370775   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:fe:a3:63:d7:b5:a8 ID:1,fe:a3:63:d7:b5:a8 Lease:0x66312a40}
	I0429 12:26:07.370785   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:4a:95:2c:11:88:ea ID:1,4a:95:2c:11:88:ea Lease:0x66312979}
	I0429 12:26:07.370791   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:e:d3:37:6c:35:44 ID:1,e:d3:37:6c:35:44 Lease:0x66312871}
	I0429 12:26:07.370807   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:aa:f1:2c:67:48:1f ID:1,aa:f1:2c:67:48:1f Lease:0x66312289}
	I0429 12:26:07.370822   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name: IPAddress:192.169.0.2 HWAddress:82:2b:f6:5b:7f:bf ID:1,82:2b:f6:5b:7f:bf Lease:0x66312406}
	I0429 12:26:09.370416   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Attempt 5
	I0429 12:26:09.370434   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:09.370529   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:09.371299   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Searching for ca:19:33:23:af:66 in /var/db/dhcpd_leases ...
	I0429 12:26:09.371370   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found 53 entries in /var/db/dhcpd_leases!
	I0429 12:26:09.371380   17722 main.go:141] libmachine: (ha-431000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.54 HWAddress:ca:19:33:23:af:66 ID:1,ca:19:33:23:af:66 Lease:0x663145d0}
	I0429 12:26:09.371387   17722 main.go:141] libmachine: (ha-431000-m02) DBG | Found match: ca:19:33:23:af:66
	I0429 12:26:09.371394   17722 main.go:141] libmachine: (ha-431000-m02) DBG | IP: 192.169.0.54
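Attempts 1 through 4 above keep re-reading /var/db/dhcpd_leases because the new node's MAC address ca:19:33:23:af:66 has not requested a lease yet; on attempt 5 a 53rd entry appears, matches, and the driver settles on 192.169.0.54. The sketch below outlines how such a MAC-to-IP lookup over a bootpd-style lease file can work. It is an illustration only: the field names (ip_address=, hw_address=) are assumed from the usual macOS lease format, and findLeaseIP is a hypothetical helper, not the hyperkit driver's own parser.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findLeaseIP scans a bootpd-style lease file for a block whose hw_address
// contains the given MAC and returns that block's ip_address.
func findLeaseIP(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip, hw string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case line == "{": // start of a new lease block
			ip, hw = "", ""
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address="):
			hw = line
		case line == "}": // end of block: check whether it belongs to our MAC
			if ip != "" && strings.Contains(hw, mac) {
				return ip, nil
			}
		}
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := findLeaseIP("/var/db/dhcpd_leases", "ca:19:33:23:af:66")
	if err != nil {
		fmt.Println("still waiting for a lease:", err)
		return
	}
	fmt.Println("IP:", ip) // 192.169.0.54 once the guest has obtained its lease
}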
	I0429 12:26:09.371443   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0429 12:26:09.371998   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:09.372094   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:09.372184   17722 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0429 12:26:09.372192   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0429 12:26:09.372282   17722 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:26:09.372344   17722 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:26:09.373081   17722 main.go:141] libmachine: Detecting operating system of created instance...
	I0429 12:26:09.373088   17722 main.go:141] libmachine: Waiting for SSH to be available...
	I0429 12:26:09.373091   17722 main.go:141] libmachine: Getting to WaitForSSH function...
	I0429 12:26:09.373096   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:09.373185   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:09.373265   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:09.373344   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:09.373427   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:09.373540   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:09.373715   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:09.373722   17722 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0429 12:26:09.398851   17722 main.go:141] libmachine: Error dialing TCP: ssh: handshake failed: ssh: unable to authenticate, attempted methods [none publickey], no supported methods remain
	I0429 12:26:12.461040   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
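The dial error at 12:26:09 followed by the empty output at 12:26:12 is the WaitForSSH phase completing: the driver keeps dialing port 22 and running the no-op command exit 0 until the guest's sshd accepts the machine key. Below is a minimal sketch of such a readiness probe with golang.org/x/crypto/ssh; the host, user and key path are copied from the surrounding log, while the retry interval and overall timeout are assumptions for illustration, not libmachine's actual values.

package main

import (
	"fmt"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// waitForSSH dials host:22 and runs "exit 0" until it succeeds or the
// deadline passes, mirroring the probe logged above.
func waitForSSH(host, user, keyPath string, timeout time.Duration) error {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM
		Timeout:         5 * time.Second,
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if client, err := ssh.Dial("tcp", host+":22", cfg); err == nil {
			sess, serr := client.NewSession()
			if serr == nil {
				runErr := sess.Run("exit 0")
				sess.Close()
				client.Close()
				if runErr == nil {
					return nil // sshd is up and the key is accepted
				}
			} else {
				client.Close()
			}
		}
		time.Sleep(2 * time.Second) // assumed retry interval
	}
	return fmt.Errorf("ssh not ready on %s within %s", host, timeout)
}

func main() {
	err := waitForSSH("192.169.0.54", "docker",
		"/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa",
		2*time.Minute)
	fmt.Println("waitForSSH:", err)
}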
	I0429 12:26:12.461055   17722 main.go:141] libmachine: Detecting the provisioner...
	I0429 12:26:12.461061   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.461182   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.461292   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.461377   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.461470   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.461607   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:12.461746   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:12.461754   17722 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0429 12:26:12.523397   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0429 12:26:12.523438   17722 main.go:141] libmachine: found compatible host: buildroot
	I0429 12:26:12.523445   17722 main.go:141] libmachine: Provisioning with buildroot...
	I0429 12:26:12.523450   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0429 12:26:12.523578   17722 buildroot.go:166] provisioning hostname "ha-431000-m02"
	I0429 12:26:12.523587   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0429 12:26:12.523662   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.523743   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.523829   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.523933   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.524027   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.524160   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:12.524319   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:12.524328   17722 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-431000-m02 && echo "ha-431000-m02" | sudo tee /etc/hostname
	I0429 12:26:12.595067   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-431000-m02
	
	I0429 12:26:12.595081   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.595219   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.595311   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.595396   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.595487   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.595609   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:12.595751   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:12.595763   17722 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-431000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-431000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-431000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0429 12:26:12.665199   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0429 12:26:12.665226   17722 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/18774-15855/.minikube CaCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/18774-15855/.minikube}
	I0429 12:26:12.665245   17722 buildroot.go:174] setting up certificates
	I0429 12:26:12.665255   17722 provision.go:84] configureAuth start
	I0429 12:26:12.665262   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetMachineName
	I0429 12:26:12.665398   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0429 12:26:12.665483   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.665568   17722 provision.go:143] copyHostCerts
	I0429 12:26:12.665595   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem
	I0429 12:26:12.665640   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem, removing ...
	I0429 12:26:12.665646   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem
	I0429 12:26:12.665779   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/ca.pem (1082 bytes)
	I0429 12:26:12.665976   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem
	I0429 12:26:12.666005   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem, removing ...
	I0429 12:26:12.666009   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem
	I0429 12:26:12.666094   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/cert.pem (1123 bytes)
	I0429 12:26:12.666237   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem
	I0429 12:26:12.666265   17722 exec_runner.go:144] found /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem, removing ...
	I0429 12:26:12.666269   17722 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem
	I0429 12:26:12.666334   17722 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/18774-15855/.minikube/key.pem (1675 bytes)
	I0429 12:26:12.666486   17722 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca-key.pem org=jenkins.ha-431000-m02 san=[127.0.0.1 192.169.0.54 ha-431000-m02 localhost minikube]
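Each node gets its own server certificate whose subject alternative names cover the lease IP, the node hostname, localhost and minikube, so the Docker TLS endpoint is valid under any of those names. Below is a simplified sketch of producing a certificate with exactly those SANs using crypto/x509. It is self-signed for brevity, whereas the log shows the real certificate being signed by the ca.pem/ca-key.pem pair; the key size and validity period are likewise assumptions.

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs mirror the log line: 127.0.0.1, 192.169.0.54, ha-431000-m02, localhost, minikube.
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-431000-m02"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().AddDate(3, 0, 0), // assumed validity
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-431000-m02", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.54")},
	}
	key, err := rsa.GenerateKey(rand.Reader, 2048) // assumed key size
	if err != nil {
		panic(err)
	}
	// Self-signed here; minikube signs with its own CA instead.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}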
	I0429 12:26:12.788915   17722 provision.go:177] copyRemoteCerts
	I0429 12:26:12.788965   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0429 12:26:12.788978   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.789131   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.789238   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.789318   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.789407   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0429 12:26:12.827773   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0429 12:26:12.827848   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0429 12:26:12.847569   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0429 12:26:12.847650   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0429 12:26:12.867106   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0429 12:26:12.867164   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0429 12:26:12.886794   17722 provision.go:87] duration metric: took 221.533222ms to configureAuth
	I0429 12:26:12.886806   17722 buildroot.go:189] setting minikube options for container-runtime
	I0429 12:26:12.886945   17722 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:26:12.886958   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:12.887088   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.887189   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.887288   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.887371   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.887445   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.887559   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:12.887688   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:12.887695   17722 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0429 12:26:12.949449   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0429 12:26:12.960507   17722 buildroot.go:70] root file system type: tmpfs
	I0429 12:26:12.960630   17722 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0429 12:26:12.960650   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:12.960811   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:12.960918   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.961031   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:12.961144   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:12.961314   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:12.961496   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:12.961559   17722 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.53"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0429 12:26:13.034441   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.53
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0429 12:26:13.034459   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:13.034584   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:13.034670   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:13.034771   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:13.034868   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:13.034990   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:13.035131   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:13.035143   17722 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0429 12:26:14.527695   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
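A note on the printf %!s(MISSING) in the command logged at 12:26:12.961559 (and the date +%!s(MISSING).%!N(MISSING) further down): these markers are produced by Go's fmt package when the driver logs a command template that itself contains literal %s and %N verbs without supplying operands for them. The guest still receives the plain printf %s ... and date +%s.%N commands, which is why the unit file and the Unix timestamp come back intact. The artifact can be reproduced in a few lines:

package main

import "fmt"

func main() {
	// A format string containing verbs with no matching arguments makes fmt
	// emit the "%!verb(MISSING)" markers seen in the log above.
	fmt.Printf("date +%s.%N\n") // prints: date +%!s(MISSING).%!N(MISSING)
}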
	I0429 12:26:14.527713   17722 main.go:141] libmachine: Checking connection to Docker...
	I0429 12:26:14.527719   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetURL
	I0429 12:26:14.527859   17722 main.go:141] libmachine: Docker is up and running!
	I0429 12:26:14.527867   17722 main.go:141] libmachine: Reticulating splines...
	I0429 12:26:14.527872   17722 client.go:171] duration metric: took 15.869221462s to LocalClient.Create
	I0429 12:26:14.527883   17722 start.go:167] duration metric: took 15.869255705s to libmachine.API.Create "ha-431000"
	I0429 12:26:14.527888   17722 start.go:293] postStartSetup for "ha-431000-m02" (driver="hyperkit")
	I0429 12:26:14.527895   17722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0429 12:26:14.527905   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:14.528059   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0429 12:26:14.528083   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:14.528190   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:14.528277   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:14.528371   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:14.528462   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0429 12:26:14.571874   17722 ssh_runner.go:195] Run: cat /etc/os-release
	I0429 12:26:14.575715   17722 info.go:137] Remote host: Buildroot 2023.02.9
	I0429 12:26:14.575725   17722 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18774-15855/.minikube/addons for local assets ...
	I0429 12:26:14.575820   17722 filesync.go:126] Scanning /Users/jenkins/minikube-integration/18774-15855/.minikube/files for local assets ...
	I0429 12:26:14.575979   17722 filesync.go:149] local asset: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem -> 163012.pem in /etc/ssl/certs
	I0429 12:26:14.575985   17722 vm_assets.go:163] NewFileAsset: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem -> /etc/ssl/certs/163012.pem
	I0429 12:26:14.576144   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0429 12:26:14.585193   17722 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/ssl/certs/163012.pem --> /etc/ssl/certs/163012.pem (1708 bytes)
	I0429 12:26:14.617800   17722 start.go:296] duration metric: took 89.904272ms for postStartSetup
	I0429 12:26:14.617830   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetConfigRaw
	I0429 12:26:14.618443   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0429 12:26:14.618595   17722 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/ha-431000/config.json ...
	I0429 12:26:14.619362   17722 start.go:128] duration metric: took 15.993555623s to createHost
	I0429 12:26:14.619378   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:14.619471   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:14.619554   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:14.619637   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:14.619749   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:14.619852   17722 main.go:141] libmachine: Using SSH client type: native
	I0429 12:26:14.619986   17722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xc897b80] 0xc89a8e0 <nil>  [] 0s} 192.169.0.54 22 <nil> <nil>}
	I0429 12:26:14.619993   17722 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0429 12:26:14.684608   17722 main.go:141] libmachine: SSH cmd err, output: <nil>: 1714418773.901729701
	
	I0429 12:26:14.684618   17722 fix.go:216] guest clock: 1714418773.901729701
	I0429 12:26:14.684638   17722 fix.go:229] Guest: 2024-04-29 12:26:13.901729701 -0700 PDT Remote: 2024-04-29 12:26:14.619372 -0700 PDT m=+66.737316646 (delta=-717.642299ms)
	I0429 12:26:14.684650   17722 fix.go:200] guest clock delta is within tolerance: -717.642299ms
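	The delta above is the guest's clock (sampled with the `date +%s.%N` command just sent over SSH) minus the time recorded on the host when the reply is read; a minimal stand-alone sketch of the same comparison (key path and user below are illustrative, not taken from this run):
	  guest=$(ssh -i /path/to/machines/ha-431000-m02/id_rsa docker@192.169.0.54 'date +%s.%N')
	  host=$(python3 -c 'import time; print(f"{time.time():.9f}")')   # portable high-resolution host timestamp
	  python3 -c 'import sys; d = float(sys.argv[1]) - float(sys.argv[2]); print(f"guest clock delta: {d:+.3f}s")' "$guest" "$host"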
	I0429 12:26:14.684654   17722 start.go:83] releasing machines lock for "ha-431000-m02", held for 16.058950363s
	I0429 12:26:14.684671   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:14.684806   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0429 12:26:14.706148   17722 out.go:177] * Found network options:
	I0429 12:26:14.727219   17722 out.go:177]   - NO_PROXY=192.169.0.53
	W0429 12:26:14.749052   17722 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 12:26:14.749119   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:14.749950   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:14.750211   17722 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:26:14.750321   17722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0429 12:26:14.750357   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	W0429 12:26:14.750432   17722 proxy.go:119] fail to check proxy env: Error ip not in block
	I0429 12:26:14.750551   17722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0429 12:26:14.750572   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:26:14.750573   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:14.750783   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:26:14.750814   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:14.751054   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:26:14.751092   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:14.751333   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	I0429 12:26:14.751365   17722 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:26:14.751506   17722 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	W0429 12:26:14.787217   17722 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0429 12:26:14.787277   17722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0429 12:26:14.848201   17722 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0429 12:26:14.848227   17722 start.go:494] detecting cgroup driver to use...
	I0429 12:26:14.848358   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:26:14.865009   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0429 12:26:14.874483   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0429 12:26:14.883536   17722 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0429 12:26:14.883585   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0429 12:26:14.892833   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:26:14.901853   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0429 12:26:14.910642   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0429 12:26:14.919451   17722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0429 12:26:14.928628   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0429 12:26:14.937504   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0429 12:26:14.946508   17722 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0429 12:26:14.955539   17722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0429 12:26:14.963731   17722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0429 12:26:14.971817   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:26:15.075369   17722 ssh_runner.go:195] Run: sudo systemctl restart containerd
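	The block above first points crictl at containerd's socket and then rewrites /etc/containerd/config.toml in place with sed (cgroupfs instead of the systemd cgroup driver, runc v2, pause:3.9, /etc/cni/net.d, unprivileged ports enabled) before restarting containerd. A sketch of the end state those edits aim for; the file contents shown in the comments are illustrative, not dumped from this run:
	  cat /etc/crictl.yaml
	  # runtime-endpoint: unix:///run/containerd/containerd.sock
	  grep -A2 'runc.options' /etc/containerd/config.toml
	  #   [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
	  #     SystemdCgroup = false
	  grep -E 'sandbox_image|conf_dir|enable_unprivileged_ports' /etc/containerd/config.toml
	  #   sandbox_image = "registry.k8s.io/pause:3.9"
	  #   enable_unprivileged_ports = true
	  #   conf_dir = "/etc/cni/net.d"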
	I0429 12:26:15.095140   17722 start.go:494] detecting cgroup driver to use...
	I0429 12:26:15.095227   17722 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0429 12:26:15.114227   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:26:15.131639   17722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0429 12:26:15.152206   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0429 12:26:15.163545   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:26:15.173829   17722 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0429 12:26:15.196676   17722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0429 12:26:15.207120   17722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0429 12:26:15.222241   17722 ssh_runner.go:195] Run: which cri-dockerd
	I0429 12:26:15.225235   17722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0429 12:26:15.232398   17722 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0429 12:26:15.245911   17722 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0429 12:26:15.342741   17722 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0429 12:26:15.443507   17722 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0429 12:26:15.443528   17722 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
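	The 130-byte daemon.json written here is what switches Docker itself to the cgroupfs driver; its exact contents are not shown in the log, but a daemon.json doing that typically looks something like the commented example below (illustrative only):
	  # /etc/docker/daemon.json, roughly:
	  # {
	  #   "exec-opts": ["native.cgroupdriver=cgroupfs"],
	  #   "log-driver": "json-file",
	  #   "log-opts": {"max-size": "100m"},
	  #   "storage-driver": "overlay2"
	  # }
	  docker info --format '{{.CgroupDriver}}'   # should print "cgroupfs" once docker restarts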
	I0429 12:26:15.456917   17722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0429 12:26:15.554009   17722 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0429 12:27:16.592140   17722 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.038821319s)
	I0429 12:27:16.592205   17722 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0429 12:27:16.627966   17722 out.go:177] 
	W0429 12:27:16.650769   17722 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Apr 29 19:26:12 ha-431000-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 19:26:12 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:12.562442558Z" level=info msg="Starting up"
	Apr 29 19:26:12 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:12.563076350Z" level=info msg="containerd not running, starting managed containerd"
	Apr 29 19:26:12 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:12.567928747Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=527
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.583689216Z" level=info msg="starting containerd" revision=926c9586fe4a6236699318391cd44976a98e31f1 version=v1.7.15
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597320910Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597367066Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597410988Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597446593Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597506031Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597540413Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597805772Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597892122Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597933427Z" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.devmapper" error="devmapper not configured"
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.597944227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.598009110Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.598199858Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.599844343Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.599884447Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.599970866Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.600003444Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.600098323Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.600162313Z" level=warning msg="could not use snapshotter devmapper in metadata plugin" error="devmapper not configured"
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.600172836Z" level=info msg="metadata content store policy set" policy=shared
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.602841911Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.602899413Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.602913390Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.602923442Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.602932201Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603001707Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603176643Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603272136Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603306202Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603317268Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603331817Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603343600Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603351894Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603363377Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603373358Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603384153Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603392284Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603399425Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603411790Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603424697Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603433133Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603441352Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603448955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603456927Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603465082Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603473249Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603488023Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603499629Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603507180Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603514591Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603523215Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603533846Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603547413Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603558028Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603566468Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603593185Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603603060Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="no OpenTelemetry endpoint: skip plugin" type=io.containerd.tracing.processor.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603610561Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603617284Z" level=info msg="skipping tracing processor initialization (no tracing plugin)" error="no OpenTelemetry endpoint: skip plugin"
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603705135Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603740552Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603750151Z" level=info msg="NRI interface is disabled by configuration."
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603879153Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603936791Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603967392Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Apr 29 19:26:12 ha-431000-m02 dockerd[527]: time="2024-04-29T19:26:12.603999139Z" level=info msg="containerd successfully booted in 0.021874s"
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.594997677Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.598794560Z" level=info msg="Loading containers: start."
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.706853403Z" level=info msg="Loading containers: done."
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.713747539Z" level=info msg="Docker daemon" commit=7cef0d9 containerd-snapshotter=false storage-driver=overlay2 version=26.0.2
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.713886174Z" level=info msg="Daemon has completed initialization"
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.742027280Z" level=info msg="API listen on /var/run/docker.sock"
	Apr 29 19:26:13 ha-431000-m02 systemd[1]: Started Docker Application Container Engine.
	Apr 29 19:26:13 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:13.744424276Z" level=info msg="API listen on [::]:2376"
	Apr 29 19:26:14 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:14.784502897Z" level=info msg="Processing signal 'terminated'"
	Apr 29 19:26:14 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:14.785439200Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Apr 29 19:26:14 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:14.785771501Z" level=info msg="Daemon shutdown complete"
	Apr 29 19:26:14 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:14.785832350Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Apr 29 19:26:14 ha-431000-m02 dockerd[521]: time="2024-04-29T19:26:14.785935215Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Apr 29 19:26:14 ha-431000-m02 systemd[1]: Stopping Docker Application Container Engine...
	Apr 29 19:26:15 ha-431000-m02 systemd[1]: docker.service: Deactivated successfully.
	Apr 29 19:26:15 ha-431000-m02 systemd[1]: Stopped Docker Application Container Engine.
	Apr 29 19:26:15 ha-431000-m02 systemd[1]: Starting Docker Application Container Engine...
	Apr 29 19:26:15 ha-431000-m02 dockerd[864]: time="2024-04-29T19:26:15.839252866Z" level=info msg="Starting up"
	Apr 29 19:27:16 ha-431000-m02 dockerd[864]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Apr 29 19:27:16 ha-431000-m02 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Apr 29 19:27:16 ha-431000-m02 systemd[1]: docker.service: Failed with result 'exit-code'.
	Apr 29 19:27:16 ha-431000-m02 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0429 12:27:16.650883   17722 out.go:239] * 
	W0429 12:27:16.652130   17722 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0429 12:27:16.735182   17722 out.go:177] 
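	The failing step is visible in the journal above: the second dockerd start times out dialing /run/containerd/containerd.sock, so systemd marks docker.service failed and the node never comes up. A few generic commands one could run on the guest to narrow this down (a diagnostic sketch, not part of the test run):
	  systemctl status docker containerd --no-pager                 # which unit is actually down
	  journalctl -u containerd --no-pager | tail -n 50              # did containerd come back after its restart?
	  ls -l /run/containerd/containerd.sock                         # does the socket dockerd is dialing exist?
	  sudo ctr --address /run/containerd/containerd.sock version    # can anything reach it?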
	
	
	==> Docker <==
	Apr 29 19:27:21 ha-431000 dockerd[1119]: time="2024-04-29T19:27:21.826775463Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:27:21 ha-431000 cri-dockerd[1017]: time="2024-04-29T19:27:21Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/a77f2c4e9a00b659bd23a71b2e06a6597386312621d0ef4b80802241e48f11f3/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Apr 29 19:27:23 ha-431000 cri-dockerd[1017]: time="2024-04-29T19:27:23Z" level=info msg="Stop pulling image gcr.io/k8s-minikube/busybox:1.28: Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:1.28"
	Apr 29 19:27:23 ha-431000 dockerd[1119]: time="2024-04-29T19:27:23.131163360Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Apr 29 19:27:23 ha-431000 dockerd[1119]: time="2024-04-29T19:27:23.131274726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Apr 29 19:27:23 ha-431000 dockerd[1119]: time="2024-04-29T19:27:23.131594744Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:27:23 ha-431000 dockerd[1119]: time="2024-04-29T19:27:23.132740553Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Apr 29 19:39:17 ha-431000 dockerd[1113]: 2024/04/29 19:39:17 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:39:18 ha-431000 dockerd[1113]: 2024/04/29 19:39:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:39:18 ha-431000 dockerd[1113]: 2024/04/29 19:39:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:39:18 ha-431000 dockerd[1113]: 2024/04/29 19:39:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:39:18 ha-431000 dockerd[1113]: 2024/04/29 19:39:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:39:18 ha-431000 dockerd[1113]: 2024/04/29 19:39:18 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:39:19 ha-431000 dockerd[1113]: 2024/04/29 19:39:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:39:19 ha-431000 dockerd[1113]: 2024/04/29 19:39:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:39:19 ha-431000 dockerd[1113]: 2024/04/29 19:39:19 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:39:21 ha-431000 dockerd[1113]: 2024/04/29 19:39:21 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:39:21 ha-431000 dockerd[1113]: 2024/04/29 19:39:21 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:39:22 ha-431000 dockerd[1113]: 2024/04/29 19:39:22 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:39:22 ha-431000 dockerd[1113]: 2024/04/29 19:39:22 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:39:22 ha-431000 dockerd[1113]: 2024/04/29 19:39:22 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:39:22 ha-431000 dockerd[1113]: 2024/04/29 19:39:22 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:39:22 ha-431000 dockerd[1113]: 2024/04/29 19:39:22 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:39:22 ha-431000 dockerd[1113]: 2024/04/29 19:39:22 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	Apr 29 19:39:23 ha-431000 dockerd[1113]: 2024/04/29 19:39:23 http: superfluous response.WriteHeader call from go.opentelemetry.io/contrib/instrumentation/net/http/otelhttp.(*respWriterWrapper).WriteHeader (wrap.go:98)
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	d991ad36f678a       gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12   12 minutes ago      Running             busybox                   0                   a77f2c4e9a00b       busybox-fc5497c4f-hb957
	1eccc3b061df0       cbb01a7bd410d                                                                                         14 minutes ago      Running             coredns                   0                   1d309f3bad709       coredns-7db6d8ff4d-42q4w
	5fc9c9934ce6d       cbb01a7bd410d                                                                                         14 minutes ago      Running             coredns                   0                   b653ad03a91d0       coredns-7db6d8ff4d-vpt8j
	2fe73ca28b695       6e38f40d628db                                                                                         14 minutes ago      Running             storage-provisioner       0                   5910bbb306534       storage-provisioner
	2fc7f6cf26c66       kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988              14 minutes ago      Running             kindnet-cni               0                   77287d05e6e30       kindnet-7v2x2
	dd7e6dcf8c180       a0bf559e280cf                                                                                         14 minutes ago      Running             kube-proxy                0                   68ae02827d8f8       kube-proxy-24zxg
	25e21b3064d0f       ghcr.io/kube-vip/kube-vip@sha256:360f0c5d02322075cc80edb9e4e0d2171e941e55072184f1f902203fafc81d0f     14 minutes ago      Running             kube-vip                  0                   eb02781b4cb9e       kube-vip-ha-431000
	253cb761306be       c42f13656d0b2                                                                                         14 minutes ago      Running             kube-apiserver            0                   0b29b8a237387       kube-apiserver-ha-431000
	8689b2595f852       c7aad43836fa5                                                                                         14 minutes ago      Running             kube-controller-manager   0                   9353c1ab6af44       kube-controller-manager-ha-431000
	395f732c4ecd8       259c8277fcbbc                                                                                         14 minutes ago      Running             kube-scheduler            0                   b5d69e14edefa       kube-scheduler-ha-431000
	f0af9c768f874       3861cfcd7c04c                                                                                         14 minutes ago      Running             etcd                      0                   d3b2bec8afdb2       etcd-ha-431000
	
	
	==> coredns [1eccc3b061df] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:59271 - 28720 "HINFO IN 7673301142508958601.5024328155936603833. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011541653s
	[INFO] 10.244.0.4:50854 - 3 "AAAA IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 140 0.054331652s
	[INFO] 10.244.0.4:59171 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 140 0.013535598s
	[INFO] 10.244.0.4:59678 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000205886s
	[INFO] 10.244.0.4:40138 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.017345083s
	[INFO] 10.244.0.4:49479 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 111 0.01082099s
	[INFO] 10.244.0.4:35756 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000317739s
	[INFO] 10.244.0.4:51111 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000143818s
	[INFO] 10.244.0.4:50301 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000055524s
	[INFO] 10.244.0.4:35729 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000076289s
	[INFO] 10.244.0.4:33262 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000085602s
	
	
	==> coredns [5fc9c9934ce6] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:46682 - 61825 "HINFO IN 2083561514213073366.6289946638194081086. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.012255792s
	[INFO] 10.244.0.4:44627 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000165757s
	[INFO] 10.244.0.4:54619 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,rd,ra 60 0.011057673s
	[INFO] 10.244.0.4:40906 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000248422s
	[INFO] 10.244.0.4:38496 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000162208s
	[INFO] 10.244.0.4:49726 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000162889s
	[INFO] 10.244.0.4:53369 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000076076s
	[INFO] 10.244.0.4:38793 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000118909s
	[INFO] 10.244.0.4:38671 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000049671s
	[INFO] 10.244.0.4:35662 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000117262s
	[INFO] 10.244.0.4:56289 - 5 "PTR IN 1.0.169.192.in-addr.arpa. udp 42 false 512" NOERROR qr,aa,rd 102 0.000077023s
	
	
	==> describe nodes <==
	Name:               ha-431000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-431000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=ha-431000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_04_29T12_25_45_0700
	                    minikube.k8s.io/version=v1.33.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:25:42 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-431000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:40:11 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:38:00 +0000   Mon, 29 Apr 2024 19:25:41 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:38:00 +0000   Mon, 29 Apr 2024 19:25:41 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:38:00 +0000   Mon, 29 Apr 2024 19:25:41 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:38:00 +0000   Mon, 29 Apr 2024 19:26:06 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.53
	  Hostname:    ha-431000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 0347e3fb0470420c86cb19dc5e37eb8c
	  System UUID:                24bd46dd-0000-0000-965c-91e20eb4b3f3
	  Boot ID:                    5a51c8ef-7deb-418c-a18b-22ccf6b75cab
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-hb957              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-42q4w             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-7db6d8ff4d-vpt8j             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-431000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-7v2x2                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-431000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-431000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-24zxg                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-431000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-431000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 14m   kube-proxy       
	  Normal  Starting                 14m   kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  14m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  14m   kubelet          Node ha-431000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    14m   kubelet          Node ha-431000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     14m   kubelet          Node ha-431000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           14m   node-controller  Node ha-431000 event: Registered Node ha-431000 in Controller
	  Normal  NodeReady                14m   kubelet          Node ha-431000 status is now: NodeReady
	
	
	Name:               ha-431000-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-431000-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=2cfd4287855d1061f3afd2cc80f438e391f2ea1e
	                    minikube.k8s.io/name=ha-431000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_04_29T12_39_54_0700
	                    minikube.k8s.io/version=v1.33.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 29 Apr 2024 19:39:54 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-431000-m03
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 29 Apr 2024 19:40:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 29 Apr 2024 19:40:08 +0000   Mon, 29 Apr 2024 19:39:54 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 29 Apr 2024 19:40:08 +0000   Mon, 29 Apr 2024 19:39:54 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 29 Apr 2024 19:40:08 +0000   Mon, 29 Apr 2024 19:39:54 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 29 Apr 2024 19:40:08 +0000   Mon, 29 Apr 2024 19:40:08 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.55
	  Hostname:    ha-431000-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 d1d590929b5f48d8a88cd6e49c1fd89f
	  System UUID:                3a894d81-0000-0000-8bd3-c96243d71541
	  Boot ID:                    ac3df235-3e10-4ab3-8ae8-83069607c3dd
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://26.0.2
	  Kubelet Version:            v1.30.0
	  Kube-Proxy Version:         v1.30.0
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-lrnxn    0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kindnet-t7t47              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18s
	  kube-system                 kube-proxy-z92dk           0 (0%)        0 (0%)      0 (0%)           0 (0%)         18s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 11s                kube-proxy       
	  Normal  NodeHasSufficientMemory  18s (x2 over 18s)  kubelet          Node ha-431000-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18s (x2 over 18s)  kubelet          Node ha-431000-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18s (x2 over 18s)  kubelet          Node ha-431000-m03 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           14s                node-controller  Node ha-431000-m03 event: Registered Node ha-431000-m03 in Controller
	  Normal  NodeReady                4s                 kubelet          Node ha-431000-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +2.683718] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.250560] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000007] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +1.108677] systemd-fstab-generator[500]: Ignoring "noauto" option for root device
	[  +0.110997] systemd-fstab-generator[512]: Ignoring "noauto" option for root device
	[  +1.889303] systemd-fstab-generator[735]: Ignoring "noauto" option for root device
	[  +0.059318] kauditd_printk_skb: 67 callbacks suppressed
	[  +0.256507] systemd-fstab-generator[770]: Ignoring "noauto" option for root device
	[  +0.100729] systemd-fstab-generator[782]: Ignoring "noauto" option for root device
	[  +0.117080] systemd-fstab-generator[796]: Ignoring "noauto" option for root device
	[  +2.384127] systemd-fstab-generator[970]: Ignoring "noauto" option for root device
	[  +0.098660] systemd-fstab-generator[982]: Ignoring "noauto" option for root device
	[  +0.104412] systemd-fstab-generator[994]: Ignoring "noauto" option for root device
	[  +0.132603] systemd-fstab-generator[1009]: Ignoring "noauto" option for root device
	[  +4.124699] systemd-fstab-generator[1104]: Ignoring "noauto" option for root device
	[  +0.056838] kauditd_printk_skb: 205 callbacks suppressed
	[  +2.446594] systemd-fstab-generator[1304]: Ignoring "noauto" option for root device
	[  +3.800237] systemd-fstab-generator[1492]: Ignoring "noauto" option for root device
	[  +0.050917] kauditd_printk_skb: 73 callbacks suppressed
	[  +7.212746] systemd-fstab-generator[1990]: Ignoring "noauto" option for root device
	[  +0.076591] kauditd_printk_skb: 72 callbacks suppressed
	[ +14.491572] kauditd_printk_skb: 12 callbacks suppressed
	[Apr29 19:26] kauditd_printk_skb: 38 callbacks suppressed
	[Apr29 19:27] kauditd_printk_skb: 26 callbacks suppressed
	
	
	==> etcd [f0af9c768f87] <==
	{"level":"info","ts":"2024-04-29T19:25:39.452287Z","caller":"embed/etcd.go:277","msg":"now serving peer/client/metrics","local-member-id":"9dc59dab1a1aba0","initial-advertise-peer-urls":["https://192.169.0.53:2380"],"listen-peer-urls":["https://192.169.0.53:2380"],"advertise-client-urls":["https://192.169.0.53:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.169.0.53:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-04-29T19:25:39.452539Z","caller":"embed/etcd.go:857","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-04-29T19:25:39.452814Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.169.0.53:2380"}
	{"level":"info","ts":"2024-04-29T19:25:39.4529Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.169.0.53:2380"}
	{"level":"info","ts":"2024-04-29T19:25:40.22976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc59dab1a1aba0 is starting a new election at term 1"}
	{"level":"info","ts":"2024-04-29T19:25:40.229854Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc59dab1a1aba0 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-04-29T19:25:40.22989Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc59dab1a1aba0 received MsgPreVoteResp from 9dc59dab1a1aba0 at term 1"}
	{"level":"info","ts":"2024-04-29T19:25:40.229907Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc59dab1a1aba0 became candidate at term 2"}
	{"level":"info","ts":"2024-04-29T19:25:40.22992Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc59dab1a1aba0 received MsgVoteResp from 9dc59dab1a1aba0 at term 2"}
	{"level":"info","ts":"2024-04-29T19:25:40.229934Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"9dc59dab1a1aba0 became leader at term 2"}
	{"level":"info","ts":"2024-04-29T19:25:40.229975Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 9dc59dab1a1aba0 elected leader 9dc59dab1a1aba0 at term 2"}
	{"level":"info","ts":"2024-04-29T19:25:40.232269Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"9dc59dab1a1aba0","local-member-attributes":"{Name:ha-431000 ClientURLs:[https://192.169.0.53:2379]}","request-path":"/0/members/9dc59dab1a1aba0/attributes","cluster-id":"ab7009b4269bb52","publish-timeout":"7s"}
	{"level":"info","ts":"2024-04-29T19:25:40.232472Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T19:25:40.232823Z","caller":"etcdserver/server.go:2578","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:25:40.233029Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-04-29T19:25:40.233086Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-04-29T19:25:40.236119Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-04-29T19:25:40.237354Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-04-29T19:25:40.237915Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.53:2379"}
	{"level":"info","ts":"2024-04-29T19:25:40.284926Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"ab7009b4269bb52","local-member-id":"9dc59dab1a1aba0","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:25:40.285192Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:25:40.285242Z","caller":"etcdserver/server.go:2602","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-04-29T19:35:40.294897Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":967}
	{"level":"info","ts":"2024-04-29T19:35:40.297588Z","caller":"mvcc/kvstore_compaction.go:68","msg":"finished scheduled compaction","compact-revision":967,"took":"2.224451ms","hash":4246457979,"current-db-size-bytes":2424832,"current-db-size":"2.4 MB","current-db-size-in-use-bytes":2424832,"current-db-size-in-use":"2.4 MB"}
	{"level":"info","ts":"2024-04-29T19:35:40.297735Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4246457979,"revision":967,"compact-revision":-1}
	
	
	==> kernel <==
	 19:40:13 up 15 min,  0 users,  load average: 0.19, 0.16, 0.10
	Linux ha-431000 5.10.207 #1 SMP Mon Apr 22 03:02:02 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [2fc7f6cf26c6] <==
	I0429 19:38:43.123849       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:38:43.124059       1 main.go:227] handling current node
	I0429 19:38:53.135472       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:38:53.135690       1 main.go:227] handling current node
	I0429 19:39:03.141037       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:39:03.141082       1 main.go:227] handling current node
	I0429 19:39:13.153671       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:39:13.153913       1 main.go:227] handling current node
	I0429 19:39:23.162284       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:39:23.162333       1 main.go:227] handling current node
	I0429 19:39:33.164980       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:39:33.165014       1 main.go:227] handling current node
	I0429 19:39:43.173255       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:39:43.173302       1 main.go:227] handling current node
	I0429 19:39:53.176953       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:39:53.176992       1 main.go:227] handling current node
	I0429 19:40:03.179616       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:40:03.179652       1 main.go:227] handling current node
	I0429 19:40:03.179659       1 main.go:223] Handling node with IPs: map[192.169.0.55:{}]
	I0429 19:40:03.179664       1 main.go:250] Node ha-431000-m03 has CIDR [10.244.1.0/24] 
	I0429 19:40:03.179804       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.1.0/24 Src: <nil> Gw: 192.169.0.55 Flags: [] Table: 0} 
	I0429 19:40:13.191657       1 main.go:223] Handling node with IPs: map[192.169.0.53:{}]
	I0429 19:40:13.191691       1 main.go:227] handling current node
	I0429 19:40:13.191699       1 main.go:223] Handling node with IPs: map[192.169.0.55:{}]
	I0429 19:40:13.191703       1 main.go:250] Node ha-431000-m03 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [253cb761306b] <==
	I0429 19:25:41.298355       1 controller.go:615] quota admission added evaluator for: namespaces
	I0429 19:25:41.298726       1 aggregator.go:165] initial CRD sync complete...
	I0429 19:25:41.298734       1 autoregister_controller.go:141] Starting autoregister controller
	I0429 19:25:41.298739       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0429 19:25:41.298742       1 cache.go:39] Caches are synced for autoregister controller
	I0429 19:25:41.341235       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0429 19:25:42.199810       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0429 19:25:42.203107       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0429 19:25:42.203598       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0429 19:25:42.475929       1 controller.go:615] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0429 19:25:42.501043       1 controller.go:615] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0429 19:25:42.531435       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0429 19:25:42.536355       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.169.0.53]
	I0429 19:25:42.537008       1 controller.go:615] quota admission added evaluator for: endpoints
	I0429 19:25:42.539635       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0429 19:25:43.279622       1 controller.go:615] quota admission added evaluator for: serviceaccounts
	I0429 19:25:44.788367       1 controller.go:615] quota admission added evaluator for: deployments.apps
	I0429 19:25:44.794388       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0429 19:25:44.803386       1 controller.go:615] quota admission added evaluator for: daemonsets.apps
	I0429 19:25:58.079997       1 controller.go:615] quota admission added evaluator for: replicasets.apps
	I0429 19:25:58.240228       1 controller.go:615] quota admission added evaluator for: controllerrevisions.apps
	E0429 19:39:15.945064       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:62063: use of closed network connection
	E0429 19:39:16.427565       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:62071: use of closed network connection
	E0429 19:39:16.885095       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:62079: use of closed network connection
	E0429 19:39:20.456182       1 conn.go:339] Error on socket receive: read tcp 192.169.0.254:8443->192.169.0.1:62115: use of closed network connection
	
	
	==> kube-controller-manager [8689b2595f85] <==
	I0429 19:26:06.946975       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="37.17µs"
	I0429 19:26:06.950414       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="27.785µs"
	I0429 19:26:06.958967       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="39.397µs"
	I0429 19:26:06.969829       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="58.121µs"
	I0429 19:26:07.886482       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="28.564µs"
	I0429 19:26:07.917554       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="9.073261ms"
	I0429 19:26:07.917615       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="23.945µs"
	I0429 19:26:07.935068       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="6.658277ms"
	I0429 19:26:07.935131       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="26.868µs"
	I0429 19:26:08.387295       1 node_lifecycle_controller.go:1050] "Controller detected that some Nodes are Ready. Exiting master disruption mode" logger="node-lifecycle-controller"
	I0429 19:27:20.003021       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="72.397546ms"
	I0429 19:27:20.009762       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="6.493528ms"
	I0429 19:27:20.018226       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="8.419727ms"
	I0429 19:27:20.018502       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="31.67µs"
	I0429 19:27:23.408503       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.094694ms"
	I0429 19:27:23.408538       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="18.077µs"
	I0429 19:39:54.235353       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-431000-m03\" does not exist"
	I0429 19:39:54.241511       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-431000-m03" podCIDRs=["10.244.1.0/24"]
	I0429 19:39:58.530080       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-431000-m03"
	I0429 19:40:08.979104       1 topologycache.go:237] "Can't get CPU or zone information for node" logger="endpointslice-controller" node="ha-431000-m03"
	I0429 19:40:08.991030       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="38.354µs"
	I0429 19:40:08.991470       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="12.991µs"
	I0429 19:40:08.997634       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="57.191µs"
	I0429 19:40:11.328596       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="3.227194ms"
	I0429 19:40:11.329190       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="20.292µs"
	
	
	==> kube-proxy [dd7e6dcf8c18] <==
	I0429 19:25:59.229445       1 server_linux.go:69] "Using iptables proxy"
	I0429 19:25:59.241484       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.53"]
	I0429 19:25:59.271201       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0429 19:25:59.271245       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0429 19:25:59.271262       1 server_linux.go:165] "Using iptables Proxier"
	I0429 19:25:59.273894       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0429 19:25:59.274358       1 server.go:872] "Version info" version="v1.30.0"
	I0429 19:25:59.274426       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0429 19:25:59.275610       1 config.go:192] "Starting service config controller"
	I0429 19:25:59.275903       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0429 19:25:59.275968       1 config.go:101] "Starting endpoint slice config controller"
	I0429 19:25:59.276082       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0429 19:25:59.276583       1 config.go:319] "Starting node config controller"
	I0429 19:25:59.277663       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0429 19:25:59.376395       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0429 19:25:59.376459       1 shared_informer.go:320] Caches are synced for service config
	I0429 19:25:59.377863       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [395f732c4ecd] <==
	W0429 19:25:41.293554       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 19:25:41.294406       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 19:25:41.294738       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0429 19:25:41.294772       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0429 19:25:41.295245       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0429 19:25:41.295295       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0429 19:25:41.295534       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 19:25:41.295865       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0429 19:25:41.296096       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0429 19:25:41.296336       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0429 19:25:41.296557       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0429 19:25:41.296590       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0429 19:25:41.296636       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 19:25:41.296645       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 19:25:41.297185       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0429 19:25:41.297218       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0429 19:25:41.298191       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0429 19:25:41.298225       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0429 19:25:42.168872       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0429 19:25:42.168946       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0429 19:25:42.211386       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0429 19:25:42.211426       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0429 19:25:42.376364       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0429 19:25:42.376406       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0429 19:25:42.882998       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Apr 29 19:35:44 ha-431000 kubelet[1997]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:35:44 ha-431000 kubelet[1997]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:35:44 ha-431000 kubelet[1997]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:35:44 ha-431000 kubelet[1997]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:36:44 ha-431000 kubelet[1997]: E0429 19:36:44.684785    1997 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:36:44 ha-431000 kubelet[1997]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:36:44 ha-431000 kubelet[1997]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:36:44 ha-431000 kubelet[1997]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:36:44 ha-431000 kubelet[1997]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:37:44 ha-431000 kubelet[1997]: E0429 19:37:44.685252    1997 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:37:44 ha-431000 kubelet[1997]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:37:44 ha-431000 kubelet[1997]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:37:44 ha-431000 kubelet[1997]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:37:44 ha-431000 kubelet[1997]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:38:44 ha-431000 kubelet[1997]: E0429 19:38:44.684405    1997 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:38:44 ha-431000 kubelet[1997]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:38:44 ha-431000 kubelet[1997]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:38:44 ha-431000 kubelet[1997]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:38:44 ha-431000 kubelet[1997]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Apr 29 19:39:20 ha-431000 kubelet[1997]: E0429 19:39:20.456491    1997 upgradeaware.go:427] Error proxying data from client to backend: readfrom tcp 127.0.0.1:55736->127.0.0.1:43599: write tcp 127.0.0.1:55736->127.0.0.1:43599: write: broken pipe
	Apr 29 19:39:44 ha-431000 kubelet[1997]: E0429 19:39:44.686677    1997 iptables.go:577] "Could not set up iptables canary" err=<
	Apr 29 19:39:44 ha-431000 kubelet[1997]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Apr 29 19:39:44 ha-431000 kubelet[1997]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Apr 29 19:39:44 ha-431000 kubelet[1997]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Apr 29 19:39:44 ha-431000 kubelet[1997]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
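The "==> kindnet <==" entries near the end of the log above show the CNI pod on ha-431000 learning the new worker's pod CIDR (10.244.1.0/24) and installing a host route toward 192.169.0.55. A minimal sketch of that step, assuming the github.com/vishvananda/netlink package rather than kindnet's actual routes.go:

package main

import (
	"log"
	"net"

	"github.com/vishvananda/netlink"
)

func main() {
	// Pod CIDR announced for ha-431000-m03 and the node IP used as gateway,
	// taken from the kindnet log lines above.
	_, podCIDR, err := net.ParseCIDR("10.244.1.0/24")
	if err != nil {
		log.Fatal(err)
	}
	route := &netlink.Route{
		Dst: podCIDR,
		Gw:  net.ParseIP("192.169.0.55"),
	}
	// Rough equivalent of the "Adding route" entry: traffic for the remote
	// node's pods is forwarded via that node's host address.
	if err := netlink.RouteAdd(route); err != nil {
		log.Fatal(err)
	}
}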
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-431000 -n ha-431000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-431000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: busybox-fc5497c4f-fwxpx
helpers_test.go:274: ======> post-mortem[TestMultiControlPlane/serial/AddWorkerNode]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context ha-431000 describe pod busybox-fc5497c4f-fwxpx
helpers_test.go:282: (dbg) kubectl --context ha-431000 describe pod busybox-fc5497c4f-fwxpx:

                                                
                                                
-- stdout --
	Name:             busybox-fc5497c4f-fwxpx
	Namespace:        default
	Priority:         0
	Service Account:  default
	Node:             <none>
	Labels:           app=busybox
	                  pod-template-hash=fc5497c4f
	Annotations:      <none>
	Status:           Pending
	IP:               
	IPs:              <none>
	Controlled By:    ReplicaSet/busybox-fc5497c4f
	Containers:
	  busybox:
	    Image:      gcr.io/k8s-minikube/busybox:1.28
	    Port:       <none>
	    Host Port:  <none>
	    Command:
	      sleep
	      3600
	    Environment:  <none>
	    Mounts:
	      /var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-4bjrr (ro)
	Conditions:
	  Type           Status
	  PodScheduled   False 
	Volumes:
	  kube-api-access-4bjrr:
	    Type:                    Projected (a volume that contains injected data from multiple sources)
	    TokenExpirationSeconds:  3607
	    ConfigMapName:           kube-root-ca.crt
	    ConfigMapOptional:       <nil>
	    DownwardAPI:             true
	QoS Class:                   BestEffort
	Node-Selectors:              <none>
	Tolerations:                 node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
	                             node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
	Events:
	  Type     Reason            Age                  From               Message
	  ----     ------            ----                 ----               -------
	  Warning  FailedScheduling  2m28s (x4 over 12m)  default-scheduler  0/1 nodes are available: 1 node(s) didn't match pod anti-affinity rules. preemption: 0/1 nodes are available: 1 No preemption victims found for incoming pod.
	  Warning  FailedScheduling  5s                   default-scheduler  0/2 nodes are available: 2 node(s) didn't match pod anti-affinity rules. preemption: 0/2 nodes are available: 2 No preemption victims found for incoming pod.

                                                
                                                
-- /stdout --
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddWorkerNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddWorkerNode (50.67s)
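The FailedScheduling events above indicate the busybox replica cannot be placed on a node that already hosts one because of pod anti-affinity, so it stays Pending until another node becomes Ready. A minimal sketch of the kind of rule that produces this, assuming a required anti-affinity term keyed on an app=busybox label (the manifest the test actually applies may differ):

package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

func main() {
	// One replica per host: a new pod may not be scheduled onto a node that
	// already runs a pod with app=busybox, which matches the
	// "didn't match pod anti-affinity rules" events above.
	affinity := &corev1.Affinity{
		PodAntiAffinity: &corev1.PodAntiAffinity{
			RequiredDuringSchedulingIgnoredDuringExecution: []corev1.PodAffinityTerm{{
				LabelSelector: &metav1.LabelSelector{
					MatchLabels: map[string]string{"app": "busybox"},
				},
				TopologyKey: "kubernetes.io/hostname",
			}},
		},
	}
	fmt.Printf("%+v\n", affinity)
}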

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (227.5s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:281: (dbg) Done: out/minikube-darwin-amd64 profile list --output json: (2m32.405074999s)
ha_test.go:304: expected profile "ha-431000" in json of 'profile list' to include 4 nodes but have 3 nodes. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-431000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-431000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\":1,\"APIServe
rPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-431000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.53\",\"Port\":8443,\"KubernetesVersion
\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.54\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.55\",\"Port\":0,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":false,\"olm\":f
alse,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":false,\"Custom
QemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
ha_test.go:307: expected profile "ha-431000" in json of 'profile list' to have "HAppy" status but have "Stopped" status. got *"{\"invalid\":[],\"valid\":[{\"Name\":\"ha-431000\",\"Status\":\"Stopped\",\"Config\":{\"Name\":\"ha-431000\",\"KeepContext\":false,\"EmbedCerts\":false,\"MinikubeISO\":\"https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso\",\"KicBaseImage\":\"gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e\",\"Memory\":2200,\"CPUs\":2,\"DiskSize\":20000,\"Driver\":\"hyperkit\",\"HyperkitVpnKitSock\":\"\",\"HyperkitVSockPorts\":[],\"DockerEnv\":null,\"ContainerVolumeMounts\":null,\"InsecureRegistry\":null,\"RegistryMirror\":[],\"HostOnlyCIDR\":\"192.168.59.1/24\",\"HypervVirtualSwitch\":\"\",\"HypervUseExternalSwitch\":false,\"HypervExternalAdapter\":\"\",\"KVMNetwork\":\"default\",\"KVMQemuURI\":\"qemu:///system\",\"KVMGPU\":false,\"KVMHidden\":false,\"KVMNUMACount\"
:1,\"APIServerPort\":8443,\"DockerOpt\":null,\"DisableDriverMounts\":false,\"NFSShare\":[],\"NFSSharesRoot\":\"/nfsshares\",\"UUID\":\"\",\"NoVTXCheck\":false,\"DNSProxy\":false,\"HostDNSResolver\":true,\"HostOnlyNicType\":\"virtio\",\"NatNicType\":\"virtio\",\"SSHIPAddress\":\"\",\"SSHUser\":\"root\",\"SSHKey\":\"\",\"SSHPort\":22,\"KubernetesConfig\":{\"KubernetesVersion\":\"v1.30.0\",\"ClusterName\":\"ha-431000\",\"Namespace\":\"default\",\"APIServerHAVIP\":\"192.169.0.254\",\"APIServerName\":\"minikubeCA\",\"APIServerNames\":null,\"APIServerIPs\":null,\"DNSDomain\":\"cluster.local\",\"ContainerRuntime\":\"docker\",\"CRISocket\":\"\",\"NetworkPlugin\":\"cni\",\"FeatureGates\":\"\",\"ServiceCIDR\":\"10.96.0.0/12\",\"ImageRepository\":\"\",\"LoadBalancerStartIP\":\"\",\"LoadBalancerEndIP\":\"\",\"CustomIngressCert\":\"\",\"RegistryAliases\":\"\",\"ExtraOptions\":null,\"ShouldLoadCachedImages\":true,\"EnableDefaultCNI\":false,\"CNI\":\"\"},\"Nodes\":[{\"Name\":\"\",\"IP\":\"192.169.0.53\",\"Port\":8443,\"Kube
rnetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m02\",\"IP\":\"192.169.0.54\",\"Port\":8443,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"docker\",\"ControlPlane\":true,\"Worker\":true},{\"Name\":\"m03\",\"IP\":\"192.169.0.55\",\"Port\":0,\"KubernetesVersion\":\"v1.30.0\",\"ContainerRuntime\":\"\",\"ControlPlane\":false,\"Worker\":true}],\"Addons\":{\"ambassador\":false,\"auto-pause\":false,\"cloud-spanner\":false,\"csi-hostpath-driver\":false,\"dashboard\":false,\"default-storageclass\":false,\"efk\":false,\"freshpod\":false,\"gcp-auth\":false,\"gvisor\":false,\"headlamp\":false,\"helm-tiller\":false,\"inaccel\":false,\"ingress\":false,\"ingress-dns\":false,\"inspektor-gadget\":false,\"istio\":false,\"istio-provisioner\":false,\"kong\":false,\"kubeflow\":false,\"kubevirt\":false,\"logviewer\":false,\"metallb\":false,\"metrics-server\":false,\"nvidia-device-plugin\":false,\"nvidia-driver-installer\":false,\"nvidia-gpu-device-plugin\":fa
lse,\"olm\":false,\"pod-security-policy\":false,\"portainer\":false,\"registry\":false,\"registry-aliases\":false,\"registry-creds\":false,\"storage-provisioner\":false,\"storage-provisioner-gluster\":false,\"storage-provisioner-rancher\":false,\"volumesnapshots\":false,\"yakd\":false},\"CustomAddonImages\":null,\"CustomAddonRegistries\":null,\"VerifyComponents\":{\"apiserver\":true,\"apps_running\":true,\"default_sa\":true,\"extra\":true,\"kubelet\":true,\"node_ready\":true,\"system_pods\":true},\"StartHostTimeout\":360000000000,\"ScheduledStop\":null,\"ExposedPorts\":[],\"ListenAddress\":\"\",\"Network\":\"\",\"Subnet\":\"\",\"MultiNodeRequested\":true,\"ExtraDisks\":0,\"CertExpiration\":94608000000000000,\"Mount\":false,\"MountString\":\"/Users:/minikube-host\",\"Mount9PVersion\":\"9p2000.L\",\"MountGID\":\"docker\",\"MountIP\":\"\",\"MountMSize\":262144,\"MountOptions\":[],\"MountPort\":0,\"MountType\":\"9p\",\"MountUID\":\"docker\",\"BinaryMirror\":\"\",\"DisableOptimizations\":false,\"DisableMetrics\":f
alse,\"CustomQemuFirmwarePath\":\"\",\"SocketVMnetClientPath\":\"\",\"SocketVMnetPath\":\"\",\"StaticIP\":\"\",\"SSHAuthSock\":\"\",\"SSHAgentPID\":0,\"GPUs\":\"\",\"AutoPauseInterval\":60000000000},\"Active\":false,\"ActiveKubeContext\":true}]}"*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000
E0429 12:43:51.674330   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 12:43:55.274019   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000: exit status 3 (1m15.098920591s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 12:44:01.517078   18212 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.53:22: connect: operation timed out
	E0429 12:44:01.517094   18212 status.go:249] status error: NewSession: new client: new client: dial tcp 192.169.0.53:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-431000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/HAppyAfterClusterStart (227.50s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (300.27s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 status --output json -v=7 --alsologtostderr
E0429 12:45:14.725137   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
ha_test.go:326: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-431000 status --output json -v=7 --alsologtostderr: exit status 3 (3m45.166401122s)

                                                
                                                
-- stdout --
	[{"Name":"ha-431000","Host":"Error","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Configured","Worker":false},{"Name":"ha-431000-m02","Host":"Error","Kubelet":"Nonexistent","APIServer":"Nonexistent","Kubeconfig":"Configured","Worker":false},{"Name":"ha-431000-m03","Host":"Error","Kubelet":"Nonexistent","APIServer":"Irrelevant","Kubeconfig":"Irrelevant","Worker":true}]

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 12:44:01.581846   18242 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:44:01.582042   18242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:44:01.582047   18242 out.go:304] Setting ErrFile to fd 2...
	I0429 12:44:01.582051   18242 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:44:01.582248   18242 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18774-15855/.minikube/bin
	I0429 12:44:01.582427   18242 out.go:298] Setting JSON to true
	I0429 12:44:01.582451   18242 mustload.go:65] Loading cluster: ha-431000
	I0429 12:44:01.582489   18242 notify.go:220] Checking for updates...
	I0429 12:44:01.582787   18242 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:44:01.582802   18242 status.go:255] checking status of ha-431000 ...
	I0429 12:44:01.583176   18242 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:44:01.583236   18242 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:44:01.591915   18242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:62280
	I0429 12:44:01.592247   18242 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:44:01.592621   18242 main.go:141] libmachine: Using API Version  1
	I0429 12:44:01.592631   18242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:44:01.592887   18242 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:44:01.593021   18242 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0429 12:44:01.593103   18242 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:44:01.593186   18242 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:44:01.594129   18242 status.go:330] ha-431000 host status = "Running" (err=<nil>)
	I0429 12:44:01.594149   18242 host.go:66] Checking if "ha-431000" exists ...
	I0429 12:44:01.594395   18242 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:44:01.594417   18242 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:44:01.603007   18242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:62282
	I0429 12:44:01.603321   18242 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:44:01.603667   18242 main.go:141] libmachine: Using API Version  1
	I0429 12:44:01.603683   18242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:44:01.603877   18242 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:44:01.603983   18242 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0429 12:44:01.604055   18242 host.go:66] Checking if "ha-431000" exists ...
	I0429 12:44:01.604285   18242 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:44:01.604309   18242 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:44:01.614488   18242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:62284
	I0429 12:44:01.614830   18242 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:44:01.615149   18242 main.go:141] libmachine: Using API Version  1
	I0429 12:44:01.615166   18242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:44:01.615364   18242 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:44:01.615477   18242 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:44:01.615600   18242 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:44:01.615622   18242 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:44:01.615699   18242 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:44:01.615790   18242 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:44:01.615881   18242 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:44:01.615961   18242 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	W0429 12:45:16.615606   18242 sshutil.go:64] dial failure (will retry): dial tcp 192.169.0.53:22: connect: operation timed out
	W0429 12:45:16.615721   18242 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.169.0.53:22: connect: operation timed out
	E0429 12:45:16.615748   18242 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.53:22: connect: operation timed out
	I0429 12:45:16.615765   18242 status.go:257] ha-431000 status: &{Name:ha-431000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 12:45:16.615788   18242 status.go:260] status error: NewSession: new client: new client: dial tcp 192.169.0.53:22: connect: operation timed out
	I0429 12:45:16.615801   18242 status.go:255] checking status of ha-431000-m02 ...
	I0429 12:45:16.616306   18242 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:45:16.616373   18242 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:45:16.625965   18242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:62287
	I0429 12:45:16.626305   18242 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:45:16.626688   18242 main.go:141] libmachine: Using API Version  1
	I0429 12:45:16.626712   18242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:45:16.626928   18242 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:45:16.627051   18242 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0429 12:45:16.627131   18242 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:45:16.627226   18242 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:45:16.628170   18242 status.go:330] ha-431000-m02 host status = "Running" (err=<nil>)
	I0429 12:45:16.628179   18242 host.go:66] Checking if "ha-431000-m02" exists ...
	I0429 12:45:16.628413   18242 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:45:16.628442   18242 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:45:16.636937   18242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:62289
	I0429 12:45:16.637270   18242 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:45:16.637597   18242 main.go:141] libmachine: Using API Version  1
	I0429 12:45:16.637614   18242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:45:16.637829   18242 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:45:16.637941   18242 main.go:141] libmachine: (ha-431000-m02) Calling .GetIP
	I0429 12:45:16.638030   18242 host.go:66] Checking if "ha-431000-m02" exists ...
	I0429 12:45:16.638292   18242 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:45:16.638322   18242 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:45:16.646780   18242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:62291
	I0429 12:45:16.647105   18242 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:45:16.647433   18242 main.go:141] libmachine: Using API Version  1
	I0429 12:45:16.647447   18242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:45:16.647675   18242 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:45:16.647782   18242 main.go:141] libmachine: (ha-431000-m02) Calling .DriverName
	I0429 12:45:16.647908   18242 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:45:16.647919   18242 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHHostname
	I0429 12:45:16.647998   18242 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHPort
	I0429 12:45:16.648082   18242 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHKeyPath
	I0429 12:45:16.648166   18242 main.go:141] libmachine: (ha-431000-m02) Calling .GetSSHUsername
	I0429 12:45:16.648236   18242 sshutil.go:53] new ssh client: &{IP:192.169.0.54 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m02/id_rsa Username:docker}
	W0429 12:46:31.648411   18242 sshutil.go:64] dial failure (will retry): dial tcp 192.169.0.54:22: connect: operation timed out
	W0429 12:46:31.648513   18242 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.169.0.54:22: connect: operation timed out
	E0429 12:46:31.648542   18242 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.54:22: connect: operation timed out
	I0429 12:46:31.648560   18242 status.go:257] ha-431000-m02 status: &{Name:ha-431000-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 12:46:31.648582   18242 status.go:260] status error: NewSession: new client: new client: dial tcp 192.169.0.54:22: connect: operation timed out
	I0429 12:46:31.648593   18242 status.go:255] checking status of ha-431000-m03 ...
	I0429 12:46:31.649163   18242 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:46:31.649217   18242 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:46:31.659668   18242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:62294
	I0429 12:46:31.660170   18242 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:46:31.660585   18242 main.go:141] libmachine: Using API Version  1
	I0429 12:46:31.660625   18242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:46:31.660876   18242 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:46:31.660988   18242 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0429 12:46:31.661077   18242 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:46:31.661164   18242 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 18112
	I0429 12:46:31.662098   18242 status.go:330] ha-431000-m03 host status = "Running" (err=<nil>)
	I0429 12:46:31.662109   18242 host.go:66] Checking if "ha-431000-m03" exists ...
	I0429 12:46:31.662348   18242 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:46:31.662374   18242 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:46:31.670935   18242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:62296
	I0429 12:46:31.671369   18242 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:46:31.671701   18242 main.go:141] libmachine: Using API Version  1
	I0429 12:46:31.671711   18242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:46:31.671924   18242 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:46:31.672017   18242 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0429 12:46:31.672108   18242 host.go:66] Checking if "ha-431000-m03" exists ...
	I0429 12:46:31.672349   18242 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:46:31.672373   18242 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:46:31.680917   18242 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:62298
	I0429 12:46:31.681274   18242 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:46:31.681616   18242 main.go:141] libmachine: Using API Version  1
	I0429 12:46:31.681644   18242 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:46:31.681878   18242 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:46:31.682021   18242 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0429 12:46:31.682162   18242 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:46:31.682192   18242 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0429 12:46:31.682271   18242 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0429 12:46:31.682352   18242 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0429 12:46:31.682453   18242 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0429 12:46:31.682541   18242 sshutil.go:53] new ssh client: &{IP:192.169.0.55 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	W0429 12:47:46.732268   18242 sshutil.go:64] dial failure (will retry): dial tcp 192.169.0.55:22: connect: operation timed out
	W0429 12:47:46.732375   18242 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.169.0.55:22: connect: operation timed out
	E0429 12:47:46.732403   18242 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.55:22: connect: operation timed out
	I0429 12:47:46.732420   18242 status.go:257] ha-431000-m03 status: &{Name:ha-431000-m03 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0429 12:47:46.732457   18242 status.go:260] status error: NewSession: new client: new client: dial tcp 192.169.0.55:22: connect: operation timed out

                                                
                                                
** /stderr **
ha_test.go:328: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-431000 status --output json -v=7 --alsologtostderr" : exit status 3
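Each of the three per-node timeouts above comes from the same probe: the status command opens an SSH session to the node and runs df -h /var (the exact pipeline appears in the ssh_runner lines). A minimal sketch of that probe, assuming golang.org/x/crypto/ssh with the key paths and docker user shown in the log; minikube's real sshutil/status code is more involved:

package main

import (
	"fmt"
	"log"
	"os"
	"time"

	"golang.org/x/crypto/ssh"
)

// diskUsage dials <ip>:22 and runs the same pipeline the status command uses
// to read how full /var is; an unreachable VM surfaces here as the
// "dial tcp ...:22: connect: operation timed out" errors seen above.
func diskUsage(ip, keyPath string) (string, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return "", err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return "", err
	}
	cfg := &ssh.ClientConfig{
		User:            "docker",
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
		Timeout:         30 * time.Second,
	}
	client, err := ssh.Dial("tcp", ip+":22", cfg)
	if err != nil {
		return "", err
	}
	defer client.Close()
	session, err := client.NewSession()
	if err != nil {
		return "", err
	}
	defer session.Close()
	out, err := session.Output(`df -h /var | awk 'NR==2{print $5}'`)
	return string(out), err
}

func main() {
	usage, err := diskUsage("192.169.0.53", "/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println("/var usage:", usage)
}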
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000
E0429 12:48:51.723880   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 12:48:55.323542   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000: exit status 3 (1m15.098915882s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 12:49:01.831944   18315 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.53:22: connect: operation timed out
	E0429 12:49:01.831965   18315 status.go:249] status error: NewSession: new client: new client: dial tcp 192.169.0.53:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-431000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/CopyFile (300.27s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (308.45s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 node stop m02 -v=7 --alsologtostderr
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-431000 node stop m02 -v=7 --alsologtostderr: (1m23.204980784s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr
E0429 12:51:58.386333   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr: exit status 7 (2m30.150277707s)

                                                
                                                
-- stdout --
	ha-431000
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-431000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-431000-m03
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 12:50:25.102417   18359 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:50:25.102693   18359 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:50:25.102699   18359 out.go:304] Setting ErrFile to fd 2...
	I0429 12:50:25.102703   18359 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:50:25.102892   18359 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18774-15855/.minikube/bin
	I0429 12:50:25.103066   18359 out.go:298] Setting JSON to false
	I0429 12:50:25.103088   18359 mustload.go:65] Loading cluster: ha-431000
	I0429 12:50:25.103127   18359 notify.go:220] Checking for updates...
	I0429 12:50:25.103408   18359 config.go:182] Loaded profile config "ha-431000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:50:25.103422   18359 status.go:255] checking status of ha-431000 ...
	I0429 12:50:25.103785   18359 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:50:25.103832   18359 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:50:25.112666   18359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:62313
	I0429 12:50:25.113003   18359 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:50:25.113419   18359 main.go:141] libmachine: Using API Version  1
	I0429 12:50:25.113431   18359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:50:25.113667   18359 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:50:25.113772   18359 main.go:141] libmachine: (ha-431000) Calling .GetState
	I0429 12:50:25.113846   18359 main.go:141] libmachine: (ha-431000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:50:25.113930   18359 main.go:141] libmachine: (ha-431000) DBG | hyperkit pid from json: 17733
	I0429 12:50:25.114900   18359 status.go:330] ha-431000 host status = "Running" (err=<nil>)
	I0429 12:50:25.114915   18359 host.go:66] Checking if "ha-431000" exists ...
	I0429 12:50:25.115186   18359 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:50:25.115206   18359 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:50:25.123510   18359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:62315
	I0429 12:50:25.123821   18359 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:50:25.124125   18359 main.go:141] libmachine: Using API Version  1
	I0429 12:50:25.124161   18359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:50:25.124391   18359 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:50:25.124498   18359 main.go:141] libmachine: (ha-431000) Calling .GetIP
	I0429 12:50:25.124584   18359 host.go:66] Checking if "ha-431000" exists ...
	I0429 12:50:25.124826   18359 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:50:25.124847   18359 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:50:25.136705   18359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:62317
	I0429 12:50:25.137022   18359 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:50:25.137351   18359 main.go:141] libmachine: Using API Version  1
	I0429 12:50:25.137377   18359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:50:25.137676   18359 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:50:25.137801   18359 main.go:141] libmachine: (ha-431000) Calling .DriverName
	I0429 12:50:25.137954   18359 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:50:25.137974   18359 main.go:141] libmachine: (ha-431000) Calling .GetSSHHostname
	I0429 12:50:25.138056   18359 main.go:141] libmachine: (ha-431000) Calling .GetSSHPort
	I0429 12:50:25.138128   18359 main.go:141] libmachine: (ha-431000) Calling .GetSSHKeyPath
	I0429 12:50:25.138205   18359 main.go:141] libmachine: (ha-431000) Calling .GetSSHUsername
	I0429 12:50:25.138283   18359 sshutil.go:53] new ssh client: &{IP:192.169.0.53 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000/id_rsa Username:docker}
	W0429 12:51:40.138148   18359 sshutil.go:64] dial failure (will retry): dial tcp 192.169.0.53:22: connect: operation timed out
	W0429 12:51:40.138319   18359 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.169.0.53:22: connect: operation timed out
	E0429 12:51:40.138342   18359 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.53:22: connect: operation timed out
	I0429 12:51:40.138359   18359 status.go:257] ha-431000 status: &{Name:ha-431000 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0429 12:51:40.138379   18359 status.go:260] status error: NewSession: new client: new client: dial tcp 192.169.0.53:22: connect: operation timed out
	I0429 12:51:40.138390   18359 status.go:255] checking status of ha-431000-m02 ...
	I0429 12:51:40.138920   18359 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:51:40.138964   18359 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:51:40.149889   18359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:62320
	I0429 12:51:40.150238   18359 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:51:40.150781   18359 main.go:141] libmachine: Using API Version  1
	I0429 12:51:40.150791   18359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:51:40.151143   18359 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:51:40.151259   18359 main.go:141] libmachine: (ha-431000-m02) Calling .GetState
	I0429 12:51:40.151343   18359 main.go:141] libmachine: (ha-431000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:51:40.151451   18359 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid from json: 17752
	I0429 12:51:40.152483   18359 main.go:141] libmachine: (ha-431000-m02) DBG | hyperkit pid 17752 missing from process table
	I0429 12:51:40.152512   18359 status.go:330] ha-431000-m02 host status = "Stopped" (err=<nil>)
	I0429 12:51:40.152522   18359 status.go:343] host is not running, skipping remaining checks
	I0429 12:51:40.152540   18359 status.go:257] ha-431000-m02 status: &{Name:ha-431000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 12:51:40.152552   18359 status.go:255] checking status of ha-431000-m03 ...
	I0429 12:51:40.152874   18359 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:51:40.152895   18359 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:51:40.161891   18359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:62322
	I0429 12:51:40.162259   18359 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:51:40.162630   18359 main.go:141] libmachine: Using API Version  1
	I0429 12:51:40.162639   18359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:51:40.163016   18359 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:51:40.163270   18359 main.go:141] libmachine: (ha-431000-m03) Calling .GetState
	I0429 12:51:40.163395   18359 main.go:141] libmachine: (ha-431000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 12:51:40.163477   18359 main.go:141] libmachine: (ha-431000-m03) DBG | hyperkit pid from json: 18112
	I0429 12:51:40.164410   18359 status.go:330] ha-431000-m03 host status = "Running" (err=<nil>)
	I0429 12:51:40.164418   18359 host.go:66] Checking if "ha-431000-m03" exists ...
	I0429 12:51:40.164658   18359 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:51:40.164682   18359 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:51:40.173444   18359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:62324
	I0429 12:51:40.173948   18359 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:51:40.174512   18359 main.go:141] libmachine: Using API Version  1
	I0429 12:51:40.174526   18359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:51:40.174785   18359 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:51:40.174955   18359 main.go:141] libmachine: (ha-431000-m03) Calling .GetIP
	I0429 12:51:40.175045   18359 host.go:66] Checking if "ha-431000-m03" exists ...
	I0429 12:51:40.175313   18359 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:51:40.175350   18359 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:51:40.184418   18359 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:62326
	I0429 12:51:40.184857   18359 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:51:40.185203   18359 main.go:141] libmachine: Using API Version  1
	I0429 12:51:40.185211   18359 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:51:40.185469   18359 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:51:40.185664   18359 main.go:141] libmachine: (ha-431000-m03) Calling .DriverName
	I0429 12:51:40.185858   18359 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 12:51:40.185869   18359 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHHostname
	I0429 12:51:40.186000   18359 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHPort
	I0429 12:51:40.186143   18359 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHKeyPath
	I0429 12:51:40.186305   18359 main.go:141] libmachine: (ha-431000-m03) Calling .GetSSHUsername
	I0429 12:51:40.186388   18359 sshutil.go:53] new ssh client: &{IP:192.169.0.55 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/ha-431000-m03/id_rsa Username:docker}
	W0429 12:52:55.186610   18359 sshutil.go:64] dial failure (will retry): dial tcp 192.169.0.55:22: connect: operation timed out
	W0429 12:52:55.186707   18359 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.169.0.55:22: connect: operation timed out
	E0429 12:52:55.186732   18359 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.55:22: connect: operation timed out
	I0429 12:52:55.186747   18359 status.go:257] ha-431000-m03 status: &{Name:ha-431000-m03 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0429 12:52:55.186770   18359 status.go:260] status error: NewSession: new client: new client: dial tcp 192.169.0.55:22: connect: operation timed out

                                                
                                                
** /stderr **
ha_test.go:375: status says not all three control-plane nodes are present: args "out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr": ha-431000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-431000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-431000-m03
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:378: status says not three hosts are running: args "out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr": ha-431000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-431000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-431000-m03
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:381: status says not three kubelets are running: args "out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr": ha-431000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-431000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-431000-m03
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
ha_test.go:384: status says not two apiservers are running: args "out/minikube-darwin-amd64 -p ha-431000 status -v=7 --alsologtostderr": ha-431000
type: Control Plane
host: Error
kubelet: Nonexistent
apiserver: Nonexistent
kubeconfig: Configured

                                                
                                                
ha-431000-m02
type: Control Plane
host: Stopped
kubelet: Stopped
apiserver: Stopped
kubeconfig: Stopped

                                                
                                                
ha-431000-m03
type: Worker
host: Error
kubelet: Nonexistent

                                                
                                                
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000
E0429 12:53:51.722682   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 12:53:55.322729   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000: exit status 3 (1m15.098470746s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 12:54:10.285749   18439 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.53:22: connect: operation timed out
	E0429 12:54:10.285791   18439 status.go:249] status error: NewSession: new client: new client: dial tcp 192.169.0.53:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-431000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/StopSecondaryNode (308.45s)
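Note: the two 75-second stalls in the status output above come from the storage-capacity probe that status runs over SSH on each node (sh -c "df -h /var | awk 'NR==2{print $5}'", ssh_runner.go:195); when port 22 on 192.169.0.53 / 192.169.0.55 never answers, each probe waits out the full TCP dial timeout before the node is reported as Error. Below is a minimal out-of-band sketch of the same probe with an explicit deadline so a dead node fails fast; the `minikube ssh` invocation, profile name, and 10-second timeout are assumptions for illustration, not what the test itself runs.

package main

// Illustrative sketch only: reproduces the /var capacity probe seen in the log
// (df -h /var | awk 'NR==2{print $5}') via `minikube ssh`, but with an explicit
// deadline so an unreachable node returns an error quickly instead of hanging
// for minutes. Binary path and profile name are taken from the log above; the
// 10-second timeout is an assumption.

import (
	"context"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
	defer cancel()

	// Run the probe inside the node VM via minikube ssh.
	cmd := exec.CommandContext(ctx, "out/minikube-darwin-amd64", "-p", "ha-431000",
		"ssh", "--", "df -h /var | awk 'NR==2{print $5}'")
	out, err := cmd.CombinedOutput()
	if err != nil {
		fmt.Printf("probe failed (node likely unreachable): %v\n", err)
		return
	}
	fmt.Printf("/var usage: %s", out)
}

With a deadline like this an unreachable node surfaces as an error after roughly ten seconds rather than tying up the status check for several minutes per node.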

                                                
                                    
TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (132.78s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
ha_test.go:390: (dbg) Non-zero exit: out/minikube-darwin-amd64 profile list --output json: signal: killed (57.676193238s)
ha_test.go:392: failed to list profiles with json format. args "out/minikube-darwin-amd64 profile list --output json": signal: killed
ha_test.go:398: failed to decode json from profile list: args "out/minikube-darwin-amd64 profile list --output json": unexpected end of JSON input
ha_test.go:411: expected the json of 'profile list' to include "ha-431000" but got *""*. args: "out/minikube-darwin-amd64 profile list --output json"
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p ha-431000 -n ha-431000: exit status 3 (1m15.100912222s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0429 12:56:23.062904   18520 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.169.0.53:22: connect: operation timed out
	E0429 12:56:23.062926   18520 status.go:249] status error: NewSession: new client: new client: dial tcp 192.169.0.53:22: connect: operation timed out

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "ha-431000" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (132.78s)
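Note: in this subtest `profile list --output json` was killed after ~57s, so the test received empty output and JSON decoding failed with "unexpected end of JSON input". Below is a minimal sketch of a defensive way to run and decode that command, checking for empty output before unmarshalling; the 60-second deadline and the schema-agnostic map decoding are assumptions for illustration, not the suite's own helper.

package main

// Minimal sketch, not the test's helper: runs `profile list --output json`
// with a deadline and guards against the empty output seen above before
// decoding. Decoding into a generic map avoids assuming the exact JSON schema.

import (
	"context"
	"encoding/json"
	"fmt"
	"os/exec"
	"time"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 60*time.Second)
	defer cancel()

	out, err := exec.CommandContext(ctx, "out/minikube-darwin-amd64",
		"profile", "list", "--output", "json").Output()
	if err != nil {
		fmt.Printf("profile list failed: %v\n", err)
		return
	}
	if len(out) == 0 {
		// This is the condition that produced "unexpected end of JSON input".
		fmt.Println("profile list returned no output")
		return
	}
	var profiles map[string]interface{}
	if err := json.Unmarshal(out, &profiles); err != nil {
		fmt.Printf("failed to decode profile list: %v\n", err)
		return
	}
	fmt.Printf("decoded %d top-level keys\n", len(profiles))
}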

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (7201.43s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0429 14:02:27.435475   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/kubenet-525000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.93:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.93:8444: i/o timeout
E0429 14:02:53.529940   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/no-preload-608000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.93:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.93:8444: i/o timeout
E0429 14:03:08.088478   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/auto-525000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.93:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.93:8444: i/o timeout
E0429 14:03:51.595221   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/old-k8s-version-687000/client.crt: no such file or directory
E0429 14:03:51.998806   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 14:03:55.597938   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.93:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.93:8444: i/o timeout
E0429 14:04:10.939505   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/calico-525000/client.crt: no such file or directory
E0429 14:04:30.186483   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/flannel-525000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.93:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.93:8444: i/o timeout
E0429 14:04:45.719257   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/kindnet-525000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.93:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.93:8444: i/o timeout
E0429 14:05:27.924082   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/custom-flannel-525000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.93:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.93:8444: i/o timeout
E0429 14:05:42.675520   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/false-525000/client.crt: no such file or directory
E0429 14:06:01.580136   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/enable-default-cni-525000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.93:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.93:8444: i/o timeout
E0429 14:06:26.730545   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/skaffold-330000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.93:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.93:8444: i/o timeout
E0429 14:06:56.719663   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/bridge-525000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.93:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.93:8444: i/o timeout
E0429 14:07:27.445495   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/kubenet-525000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.93:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.93:8444: i/o timeout
E0429 14:07:53.539632   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/no-preload-608000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.93:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.93:8444: i/o timeout
E0429 14:08:08.096178   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/auto-525000/client.crt: no such file or directory
E0429 14:08:35.063146   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.93:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.93:8444: i/o timeout
E0429 14:08:51.602389   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/old-k8s-version-687000/client.crt: no such file or directory
E0429 14:08:52.007064   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 14:08:55.605721   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.93:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.93:8444: i/o timeout
E0429 14:09:10.949161   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/calico-525000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.93:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.93:8444: i/o timeout
E0429 14:09:45.726647   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/kindnet-525000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.93:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.93:8444: i/o timeout
E0429 14:10:14.655544   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/old-k8s-version-687000/client.crt: no such file or directory
E0429 14:10:27.933008   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/custom-flannel-525000/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.169.0.93:8444/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.169.0.93:8444: i/o timeout
E0429 14:10:42.683439   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/false-525000/client.crt: no such file or directory
panic: test timed out after 2h0m0s
running tests:
	TestNetworkPlugins (49m5s)
	TestNetworkPlugins/group (29m37s)
	TestStartStop (40m56s)
	TestStartStop/group/default-k8s-diff-port (19m36s)
	TestStartStop/group/default-k8s-diff-port/serial (19m36s)
	TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (8m42s)
	TestStartStop/group/newest-cni (13m41s)
	TestStartStop/group/newest-cni/serial (13m41s)
	TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (19s)
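The panic above is Go's per-binary test deadline firing after 2h0m0s (see goroutine 4139 below, testing.(*M).startAlarm), while UserAppExistsAfterStop was still polling for dashboard pods against the unreachable apiserver at 192.169.0.93:8444. Below is a minimal sketch of the kind of labelled pod list being polled, written with client-go and a per-request deadline; reading the kubeconfig from $KUBECONFIG and the 30-second timeout are assumptions for illustration, not the suite's own code.

package main

// Illustrative sketch only: issues the same labelled pod list that the
// warnings above show timing out (k8s-app=kubernetes-dashboard in namespace
// kubernetes-dashboard), with a per-request deadline. Kubeconfig source and
// timeout are assumptions.

import (
	"context"
	"fmt"
	"os"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("KUBECONFIG"))
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
	defer cancel()

	pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(ctx, metav1.ListOptions{
		LabelSelector: "k8s-app=kubernetes-dashboard",
	})
	if err != nil {
		// With the apiserver unreachable, this is where the i/o timeouts surface.
		fmt.Printf("pod list failed: %v\n", err)
		return
	}
	fmt.Printf("found %d dashboard pods\n", len(pods.Items))
}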

                                                
                                                
goroutine 4139 [running]:
testing.(*M).startAlarm.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2366 +0x385
created by time.goFunc
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/time/sleep.go:177 +0x2d

                                                
                                                
goroutine 1 [chan receive, 44 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0005c2820, 0xc001275bb0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
testing.runTests(0xc000116348, {0x805afc0, 0x2a, 0x2a}, {0x3bacaa5?, 0x56e2e19?, 0x807dd80?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2159 +0x445
testing.(*M).Run(0xc001f90780)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:2027 +0x68b
k8s.io/minikube/test/integration.TestMain(0xc001f90780)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/main_test.go:62 +0x8b
main.main()
	_testmain.go:131 +0x195

                                                
                                                
goroutine 10 [select]:
go.opencensus.io/stats/view.(*worker).start(0xc0004feb80)
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:292 +0x9f
created by go.opencensus.io/stats/view.init.0 in goroutine 1
	/var/lib/jenkins/go/pkg/mod/go.opencensus.io@v0.24.0/stats/view/worker.go:34 +0x8d

                                                
                                                
goroutine 2754 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002819320)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2753
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3010 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3009
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2626 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2625
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3971 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x6cf9240, 0xc00091a3c0}, 0xc002711f50, 0xc001febf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x6cf9240, 0xc00091a3c0}, 0x0?, 0xc002711f50, 0xc002711f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x6cf9240?, 0xc00091a3c0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002711fd0?, 0x3c66c04?, 0xc0026d0cc0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3985
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 26 [select]:
k8s.io/klog/v2.(*flushDaemon).run.func1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1174 +0x117
created by k8s.io/klog/v2.(*flushDaemon).run in goroutine 25
	/var/lib/jenkins/go/pkg/mod/k8s.io/klog/v2@v2.120.1/klog.go:1170 +0x171

                                                
                                                
goroutine 157 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x6cf9240, 0xc00091a3c0}, 0xc001270750, 0xc001fa1f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x6cf9240, 0xc00091a3c0}, 0x0?, 0xc001270750, 0xc001270798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x6cf9240?, 0xc00091a3c0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x0?, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 173
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3970 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0xc002160cd0, 0x3)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x67c33a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0020108a0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002160d00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0028b0440, {0x6cd5760, 0xc0029385a0}, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0028b0440, 0x3b9aca00, 0x0, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3985
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3097 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc0021606d0, 0x17)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x67c33a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0026d0ba0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002160700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002177c20, {0x6cd5760, 0xc0022a43c0}, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002177c20, 0x3b9aca00, 0x0, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3113
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3220 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3219
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 158 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 157
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 156 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0xc00201c6d0, 0x2d)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x67c33a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002010c60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00201c700)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc001279c50, {0x6cd5760, 0xc000a07290}, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc001279c50, 0x3b9aca00, 0x0, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 173
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3675 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc002a90c10, 0x16)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x67c33a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002e10ae0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002a90c40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0028b0e70, {0x6cd5760, 0xc002ccbf50}, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0028b0e70, 0x3b9aca00, 0x0, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3659
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2746 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2745
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3112 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0026d0cc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3093
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3445 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3444
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2625 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x6cf9240, 0xc00091a3c0}, 0xc00126e750, 0xc00126e798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x6cf9240, 0xc00091a3c0}, 0x40?, 0xc00126e750, 0xc00126e798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x6cf9240?, 0xc00091a3c0?}, 0xc00126e7b0?, 0x4073858?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00126e7d0?, 0x3c66c04?, 0xc002d16b40?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2614
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 1106 [chan send, 107 minutes]:
os/exec.(*Cmd).watchCtx(0xc0025fe9a0, 0xc0025f07e0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1105
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

                                                
                                                
goroutine 2791 [chan receive, 37 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00201cb00, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2789
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 172 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002010f00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 144
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 173 [chan receive, 117 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00201c700, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 144
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 4080 [chan receive]:
testing.(*T).Run(0xc002281a00, {0x56b4c9e?, 0x60400000004?}, 0xc0004fea80)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc002281a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc002281a00, 0xc0028c2080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2532
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 2976 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0027efe50, 0x17)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x67c33a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00286ce40)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0027efe80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002d72d80, {0x6cd5760, 0xc000a071d0}, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002d72d80, 0x3b9aca00, 0x0, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2996
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 3490 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0025c98c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3488
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                                
goroutine 3460 [chan receive, 31 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002bdb0c0, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3439
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3009 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x6cf9240, 0xc00091a3c0}, 0xc002713f50, 0xc002713f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x6cf9240, 0xc00091a3c0}, 0x40?, 0xc002713f50, 0xc002713f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x6cf9240?, 0xc00091a3c0?}, 0xc002713fb0?, 0x4073858?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002713fd0?, 0x3c66c04?, 0xc002f20840?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2996
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 3317 [sync.Cond.Wait, 1 minutes]:
sync.runtime_notifyListWait(0xc0008c6cd0, 0x17)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x67c33a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002980d80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0008c6e00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0021760e0, {0x6cd5760, 0xc002692060}, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0021760e0, 0x3b9aca00, 0x0, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3312
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 964 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc002161510, 0x2b)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x67c33a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0025b6540)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002161540)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002179e80, {0x6cd5760, 0xc0020f5560}, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002179e80, 0x3b9aca00, 0x0, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 981
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 2614 [chan receive, 38 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0022e0580, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2609
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 2018 [chan receive, 50 minutes]:
testing.(*T).Run(0xc002133040, {0x56898e7?, 0xc26ddc04299?}, 0xc0023bf620)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestNetworkPlugins(0xc002133040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/net_test.go:52 +0xd4
testing.tRunner(0xc002133040, 0x6cc9558)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 966 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 965
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 2779 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00201cad0, 0x18)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x67c33a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc00286dec0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00201cb00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009ea960, {0x6cd5760, 0xc002810030}, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0009ea960, 0x3b9aca00, 0x0, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2791
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

                                                
                                                
goroutine 4137 [IO wait]:
internal/poll.runtime_pollWait(0x4f89b3a8, 0x77)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0028c2480?, 0x0?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitWrite(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:93
internal/poll.(*FD).WaitWrite(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:683
net.(*netFD).connect(0xc0028c2480, {0x6cf8fd8, 0xc002d67380}, {0xc002039168?, 0x3aeccfb?}, {0x6cd4538?, 0xc00283e1e0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_unix.go:141 +0x70b
net.(*netFD).dial(0xc0028c2480, {0x6cf8fd8, 0xc002d67380}, {0x6cfe510?, 0x0?}, {0x6cfe510, 0xc002d67530}, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/sock_posix.go:124 +0x3bc
net.socket({0x6cf8fd8, 0xc002d67380}, {0x568803f, 0x3}, 0x2, 0x1, 0xc000010018?, 0x0, {0x6cfe510, 0x0}, ...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/sock_posix.go:70 +0x29b
net.internetSocket({0x6cf8fd8, 0xc002d67380}, {0x568803f, 0x3}, {0x6cfe510, 0x0}, {0x6cfe510, 0xc002d67530}, 0x1, 0x0, ...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/ipsock_posix.go:154 +0xf8
net.(*sysDialer).doDialTCPProto(0xc002c1c000, {0x6cf8fd8, 0xc002d67380}, 0x0, 0xc002d67530, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock_posix.go:85 +0xec
net.(*sysDialer).doDialTCP(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock_posix.go:75
net.(*sysDialer).dialTCP(0xc002d67530?, {0x6cf8fd8?, 0xc002d67380?}, 0x3b5393a?, 0xc002039400?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock_posix.go:71 +0x65
net.(*sysDialer).dialSingle(0xc002c1c000, {0x6cf8fd8, 0xc002d67380}, {0x6ce1bf8, 0xc002d67530})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/dial.go:651 +0x27d
net.(*sysDialer).dialSerial(0xc002c1c000, {0x6cf8fd8, 0xc002d67380}, {0xc00256a3c0?, 0x1, 0xc00256a3c0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/dial.go:616 +0x24e
net.(*sysDialer).dialParallel(0xc00256a3b0?, {0x6cf8fd8?, 0xc002d67380?}, {0xc00256a3c0?, 0xc002d674d0?, 0x5688d26?}, {0x0?, 0x568803f?, 0xc001fe7828?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/dial.go:517 +0x3b4
net.(*Dialer).DialContext(0xc000568460, {0x6cf8fd8, 0xc002d67380}, {0x568803f, 0x3}, {0xc002ace678, 0x11})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/dial.go:508 +0x69a
k8s.io/client-go/util/connrotation.(*Dialer).DialContext(0xc0028d0180, {0x6cf8fd8?, 0xc002d67380?}, {0x568803f?, 0x60?}, {0xc002ace678?, 0x3c0e2f1?})
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/connrotation/connrotation.go:118 +0x43
net/http.(*Transport).dial(0x47c?, {0x6cf8fd8?, 0xc002d67380?}, {0x568803f?, 0x0?}, {0xc002ace678?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1187 +0xd2
net/http.(*Transport).dialConn(0xc000932000, {0x6cf8fd8, 0xc002d67380}, {{}, 0x0, {0xc000632460, 0x5}, {0xc002ace678, 0x11}, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1648 +0x7e8
net/http.(*Transport).dialConnFor(0xc000932000, 0xc000284e70)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1485 +0xcd
created by net/http.(*Transport).queueForDial in goroutine 4086
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1449 +0x3c9

goroutine 3443 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc002bdb090, 0x17)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x67c33a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002819440)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002bdb0c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002cd32b0, {0x6cd5760, 0xc0009bb350}, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002cd32b0, 0x3b9aca00, 0x0, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3460
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 741 [IO wait, 113 minutes]:
internal/poll.runtime_pollWait(0x4f89bf48, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc0004fe180?, 0x3fe?, 0x0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Accept(0xc0004fe180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:611 +0x2ac
net.(*netFD).accept(0xc0004fe180)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_unix.go:172 +0x29
net.(*TCPListener).accept(0xc001274060)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock_posix.go:159 +0x1e
net.(*TCPListener).Accept(0xc001274060)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/tcpsock.go:327 +0x30
net/http.(*Server).Serve(0xc0009b40f0, {0x6cec0f0, 0xc001274060})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3255 +0x33e
net/http.(*Server).ListenAndServe(0xc0009b40f0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/server.go:3184 +0x71
k8s.io/minikube/test/integration.startHTTPProxy.func1(0xc0020c2b60?, 0xc0020c2b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2209 +0x18
created by k8s.io/minikube/test/integration.startHTTPProxy in goroutine 738
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/functional_test.go:2208 +0x129

goroutine 2096 [sync.Cond.Wait, 5 minutes]:
sync.runtime_notifyListWait(0xc0021605d0, 0x1b)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x67c33a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc001ffaa80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002160600)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc00068c020, {0x6cd5760, 0xc002688030}, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc00068c020, 0x3b9aca00, 0x0, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2075
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 3319 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3318
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3509 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x6cf9240, 0xc00091a3c0}, 0xc00230af50, 0xc00230af98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x6cf9240, 0xc00091a3c0}, 0x20?, 0xc00230af50, 0xc00230af98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x6cf9240?, 0xc00091a3c0?}, 0x657461722d656b?, 0x4f435f54524f4853?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00230afd0?, 0x3c66c04?, 0xc00091b320?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3491
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 1353 [chan send, 107 minutes]:
os/exec.(*Cmd).watchCtx(0xc002bb1a20, 0xc002d16540)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 845
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 2097 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x6cf9240, 0xc00091a3c0}, 0xc000094f50, 0xc000094f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x6cf9240, 0xc00091a3c0}, 0x60?, 0xc000094f50, 0xc000094f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x6cf9240?, 0xc00091a3c0?}, 0xc002f24000?, 0x3c20900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x3c66ba5?, 0xc0025ff080?, 0xc002ea9860?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2075
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 2755 [chan receive, 37 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002aaea00, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2753
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 3211 [chan receive, 33 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0027efc80, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3206
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 1372 [select, 107 minutes]:
net/http.(*persistConn).writeLoop(0xc002c5b440)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2444 +0xf0
created by net/http.(*Transport).dialConn in goroutine 1395
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1800 +0x1585

goroutine 1300 [chan send, 107 minutes]:
os/exec.(*Cmd).watchCtx(0xc002c946e0, 0xc002bc4ba0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1299
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 2995 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc00286cfc0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2972
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3312 [chan receive, 31 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0008c6e00, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3310
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 2075 [chan receive, 50 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002160600, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2045
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 2780 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x6cf9240, 0xc00091a3c0}, 0xc000187f50, 0xc000187f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x6cf9240, 0xc00091a3c0}, 0xe0?, 0xc000187f50, 0xc000187f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x6cf9240?, 0xc00091a3c0?}, 0xc000187fb0?, 0x4073858?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x407381b?, 0xc002bf0900?, 0xc001f9b700?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2791
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 2110 [chan receive, 29 minutes]:
testing.(*testContext).waitParallel(0xc000691310)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1817 +0xac
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1665 +0x5e9
testing.tRunner(0xc002133a00, 0xc0023bf620)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2018
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2074 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc001ffac00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2045
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 965 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x6cf9240, 0xc00091a3c0}, 0xc002105f50, 0xc002105f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x6cf9240, 0xc00091a3c0}, 0x0?, 0xc002105f50, 0xc002105f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x6cf9240?, 0xc00091a3c0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0012717d0?, 0x40eab45?, 0xc0025b6660?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 981
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 980 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0025b6660)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 874
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3099 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3098
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 981 [chan receive, 107 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002161540, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 874
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 2613 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002dbd020)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2609
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2119 [chan receive, 42 minutes]:
testing.(*T).Run(0xc0005c2b60, {0x56898e7?, 0x3c1ffd3?}, 0x6cc9700)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop(0xc0005c2b60)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:46 +0x35
testing.tRunner(0xc0005c2b60, 0x6cc95a0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 1
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2744 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc002aae9d0, 0x18)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x67c33a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002819200)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc002aaea00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002cd3380, {0x6cd5760, 0xc00207fc20}, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002cd3380, 0x3b9aca00, 0x0, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2755
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 2996 [chan receive, 35 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0027efe80, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 2972
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 3510 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3509
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2532 [chan receive, 13 minutes]:
testing.(*T).Run(0xc0020c3860, {0x568af19?, 0x0?}, 0xc0028c2080)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0020c3860)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0020c3860, 0xc002aae340)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2530
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 3219 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x6cf9240, 0xc00091a3c0}, 0xc00126d750, 0xc00126d798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x6cf9240, 0xc00091a3c0}, 0xa0?, 0xc00126d750, 0xc00126d798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x6cf9240?, 0xc00091a3c0?}, 0xc002280000?, 0x3c20900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x3c66ba5?, 0xc00447e420?, 0xc002bc57a0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3211
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 1371 [select, 107 minutes]:
net/http.(*persistConn).readLoop(0xc002c5b440)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:2261 +0xd3a
created by net/http.(*Transport).dialConn in goroutine 1395
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1799 +0x152f

goroutine 2781 [select]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2780
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 2592 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc0022e0550, 0x18)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x67c33a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002dbcf00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0022e0580)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0022d4220, {0x6cd5760, 0xc00207e1e0}, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0022d4220, 0x3b9aca00, 0x0, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2614
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 3098 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x6cf9240, 0xc00091a3c0}, 0xc002711f50, 0xc002074f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x6cf9240, 0xc00091a3c0}, 0x0?, 0xc002711f50, 0xc002711f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x6cf9240?, 0xc00091a3c0?}, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002711fd0?, 0x40eab45?, 0xc0026d0cc0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3113
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 2098 [select, 5 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 2097
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 1248 [chan send, 107 minutes]:
os/exec.(*Cmd).watchCtx(0xc002b37b80, 0xc002b5e240)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:789 +0x3ff
created by os/exec.(*Cmd).Start in goroutine 1247
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973

goroutine 3444 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x6cf9240, 0xc00091a3c0}, 0xc002307f50, 0xc001faaf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x6cf9240, 0xc00091a3c0}, 0xc0?, 0xc002307f50, 0xc002307f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x6cf9240?, 0xc00091a3c0?}, 0xc000a041a0?, 0x3c20900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0x3c66ba5?, 0xc0028bb340?, 0xc002b6e0c0?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3460
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 3113 [chan receive, 33 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002160700, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3093
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 3218 [sync.Cond.Wait, 3 minutes]:
sync.runtime_notifyListWait(0xc0027efc50, 0x17)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x67c33a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0020fe4e0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0027efc80)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc002d726f0, {0x6cd5760, 0xc002123cb0}, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc002d726f0, 0x3b9aca00, 0x0, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3211
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 3491 [chan receive, 31 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc0027eff00, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3488
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 2533 [chan receive, 19 minutes]:
testing.(*T).Run(0xc0020c3a00, {0x568af19?, 0x0?}, 0xc0028c3200)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1(0xc0020c3a00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:130 +0xad9
testing.tRunner(0xc0020c3a00, 0xc002aae380)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2530
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 2790 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002614060)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 2789
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3210 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0020fe600)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3206
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 2745 [select]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x6cf9240, 0xc00091a3c0}, 0xc000188750, 0xc00210bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x6cf9240, 0xc00091a3c0}, 0x60?, 0xc000188750, 0xc000188798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x6cf9240?, 0xc00091a3c0?}, 0xc002280d00?, 0x3c20900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0001887d0?, 0x3c66c04?, 0xc00091a660?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 2755
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 2530 [chan receive, 19 minutes]:
testing.tRunner.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1650 +0x4ab
testing.tRunner(0xc0020c3520, 0x6cc9700)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1695 +0x134
created by testing.(*T).Run in goroutine 2119
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 3459 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002819560)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3439
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3311 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002980ea0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3310
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3318 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x6cf9240, 0xc00091a3c0}, 0xc002309f50, 0xc0024e9f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x6cf9240, 0xc00091a3c0}, 0x31?, 0xc002309f50, 0xc002309f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x6cf9240?, 0xc00091a3c0?}, 0x5f455341425f424f?, 0x7079483d454d414e?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc002309fd0?, 0x3c66c04?, 0x696d2f2f3a737074?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3312
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 3508 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc0027efed0, 0x17)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x67c33a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc0025c9560)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc0027eff00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0009e9520, {0x6cd5760, 0xc00298bb00}, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0009e9520, 0x3b9aca00, 0x0, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3491
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 4153 [syscall]:
syscall.syscall6(0xc002811f80?, 0x1000000000010?, 0x10000000019?, 0x4f980238?, 0x90?, 0x89975b8?, 0x90?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sys_darwin.go:45 +0x98
syscall.wait4(0xc002033a50?, 0x3aed0a5?, 0x90?, 0x6c36140?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/zsyscall_darwin_amd64.go:44 +0x45
syscall.Wait4(0x3c1dc45?, 0xc002033a84, 0x0?, 0x0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/syscall/syscall_bsd.go:144 +0x25
os.(*Process).wait(0xc002cd58c0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec_unix.go:43 +0x6d
os.(*Process).Wait(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec.go:134
os/exec.(*Cmd).Wait(0xc002d99ce0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:897 +0x45
os/exec.(*Cmd).Run(0xc002d99ce0)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:607 +0x2d
k8s.io/minikube/test/integration.Run(0xc002280d00, 0xc002d99ce0)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:103 +0x1e5
k8s.io/minikube/test/integration.validateEnableAddonWhileActive({0x6cf9080, 0xc000928150}, 0xc002280d00, {0xc002acea68, 0x11}, {0x5694837, 0xa}, {0x3c1ffd3?, 0x3b77f2f?}, {0xc002e00100, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:205 +0x1d5
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc002280d00)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc002280d00, 0xc0004fea80)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 4080
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

goroutine 3732 [chan receive, 27 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc00270af00, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3717
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 4155 [IO wait]:
internal/poll.runtime_pollWait(0x4f89bc60, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002e107e0?, 0xc002ea4800?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002e107e0, {0xc002ea4800, 0x200, 0x200})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0023d08f0, {0xc002ea4800?, 0xc0027125a8?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc0028119b0, {0x6cd4178, 0xc0008a0248})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x6cd42b8, 0xc0028119b0}, {0x6cd4178, 0xc0008a0248}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0xc002cb5020?, {0x6cd42b8, 0xc0028119b0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x801cf20?, {0x6cd42b8?, 0xc0028119b0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x6cd42b8, 0xc0028119b0}, {0x6cd4238, 0xc0023d08f0}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc002cb5080?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 4153
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

goroutine 4138 [select]:
net.(*netFD).connect.func2()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_unix.go:118 +0x7a
created by net.(*netFD).connect in goroutine 4137
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/fd_unix.go:117 +0x37c

goroutine 3731 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002cb5080)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3717
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3721 [sync.Cond.Wait]:
sync.runtime_notifyListWait(0xc00270aed0, 0x16)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/sema.go:569 +0x159
sync.(*Cond).Wait(0x67c33a0?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/sync/cond.go:70 +0x85
k8s.io/client-go/util/workqueue.(*Type).Get(0xc002cb4f60)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/queue.go:200 +0x93
k8s.io/client-go/transport.(*dynamicClientCert).processNextWorkItem(0xc00270af00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:156 +0x47
k8s.io/client-go/transport.(*dynamicClientCert).runWorker(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:151
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x30?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:226 +0x33
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc0028d1510, {0x6cd5760, 0xc002938f00}, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:227 +0xaf
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0xc0028d1510, 0x3b9aca00, 0x0, 0x1, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:204 +0x7f
k8s.io/apimachinery/pkg/util/wait.Until(...)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/backoff.go:161
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3732
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:140 +0x1ef

goroutine 3723 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3722
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 4154 [IO wait]:
internal/poll.runtime_pollWait(0x4f997910, 0x72)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/runtime/netpoll.go:345 +0x85
internal/poll.(*pollDesc).wait(0xc002e10720?, 0xc002ea4600?, 0x1)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:84 +0x27
internal/poll.(*pollDesc).waitRead(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_poll_runtime.go:89
internal/poll.(*FD).Read(0xc002e10720, {0xc002ea4600, 0x200, 0x200})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/internal/poll/fd_unix.go:164 +0x27a
os.(*File).read(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file_posix.go:29
os.(*File).Read(0xc0023d08b8, {0xc002ea4600?, 0xc00018d678?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:118 +0x52
bytes.(*Buffer).ReadFrom(0xc002811980, {0x6cd4178, 0xc0008a0240})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/bytes/buffer.go:211 +0x98
io.copyBuffer({0x6cd42b8, 0xc002811980}, {0x6cd4178, 0xc0008a0240}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:415 +0x151
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os.genericWriteTo(0x7f8f860?, {0x6cd42b8, 0xc002811980})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:269 +0x58
os.(*File).WriteTo(0x801cf20?, {0x6cd42b8?, 0xc002811980?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/file.go:247 +0x49
io.copyBuffer({0x6cd42b8, 0xc002811980}, {0x6cd4238, 0xc0023d08b8}, {0x0, 0x0, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:411 +0x9d
io.Copy(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/io/io.go:388
os/exec.(*Cmd).writerDescriptor.func1()
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:577 +0x34
os/exec.(*Cmd).Start.func2(0xc0004fea80?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:724 +0x2c
created by os/exec.(*Cmd).Start in goroutine 4153
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:723 +0x9ab

goroutine 3677 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3676
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

goroutine 3658 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc002e10c00)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3654
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

goroutine 3676 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x6cf9240, 0xc00091a3c0}, 0xc00230bf50, 0xc001fe5f98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x6cf9240, 0xc00091a3c0}, 0x20?, 0xc00230bf50, 0xc00230bf98)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x6cf9240?, 0xc00091a3c0?}, 0xc00230bfb0?, 0x4073858?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc00230bfd0?, 0x3c66c04?, 0xc000058a20?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3659
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

goroutine 3659 [chan receive, 29 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002a90c40, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3654
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

goroutine 3722 [select, 3 minutes]:
k8s.io/apimachinery/pkg/util/wait.waitForWithContext({0x6cf9240, 0xc00091a3c0}, 0xc002714750, 0xc002714798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/wait.go:205 +0xd1
k8s.io/apimachinery/pkg/util/wait.poll({0x6cf9240, 0xc00091a3c0}, 0x60?, 0xc002714750, 0xc002714798)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:260 +0x89
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext({0x6cf9240?, 0xc00091a3c0?}, 0xc000a041a0?, 0x3c20900?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:200 +0x53
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntil(0xc0027147d0?, 0x3c66c04?, 0xc002f21560?)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:187 +0x3c
created by k8s.io/client-go/transport.(*dynamicClientCert).Run in goroutine 3732
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:142 +0x29a

                                                
                                                
goroutine 4156 [select]:
os/exec.(*Cmd).watchCtx(0xc002d99ce0, 0xc000058d20)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:764 +0xb5
created by os/exec.(*Cmd).Start in goroutine 4153
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/os/exec/exec.go:750 +0x973
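
This watchCtx goroutine comes from the standard library: whenever the harness starts a command with a context attached, os/exec spawns a watcher that kills the process if the context is cancelled before it exits. A rough sketch of the pattern (the profile name below is made up; the real tests wrap this in their own Run helper):

	package example

	import (
		"context"
		"fmt"
		"os/exec"
		"time"
	)

	func demoCommandContext() {
		// CommandContext + Start is what creates the watchCtx goroutine
		// seen above; it waits on either process exit or ctx.Done().
		ctx, cancel := context.WithTimeout(context.Background(), 30*time.Second)
		defer cancel()

		cmd := exec.CommandContext(ctx, "out/minikube-darwin-amd64", "status", "-p", "example-profile")
		out, err := cmd.CombinedOutput()
		if err != nil {
			fmt.Printf("command failed: %v\n%s", err, out)
			return
		}
		fmt.Printf("%s", out)
	}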

                                                
                                                
goroutine 4086 [select]:
net/http.(*Transport).getConn(0xc000932000, 0xc00270a0c0, {{}, 0x0, {0xc000632460, 0x5}, {0xc002ace678, 0x11}, 0x0})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:1406 +0x5a5
net/http.(*Transport).roundTrip(0xc000932000, 0xc002114360)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/transport.go:595 +0x73a
net/http.(*Transport).RoundTrip(0x6aaf5e0?, 0xc002d67470?)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/roundtrip.go:17 +0x13
k8s.io/client-go/transport.(*userAgentRoundTripper).RoundTrip(0xc000160d80, 0xc002114240)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/round_trippers.go:168 +0x326
net/http.send(0xc002114240, {0x6cd9880, 0xc000160d80}, {0x3aecc01?, 0x2c?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/client.go:259 +0x5e4
net/http.(*Client).send(0xc002ee7860, 0xc002114240, {0x0?, 0xc002114240?, 0x0?})
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/client.go:180 +0x98
net/http.(*Client).do(0xc002ee7860, 0xc002114240)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/client.go:724 +0x8dc
net/http.(*Client).Do(...)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/net/http/client.go:590
k8s.io/client-go/rest.(*Request).request(0xc002114000, {0x6cf9080, 0xc0005685b0}, 0xc00006ee20)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/rest/request.go:1023 +0x397
k8s.io/client-go/rest.(*Request).Do(0xc002114000, {0x6cf9080, 0xc0005685b0})
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/rest/request.go:1063 +0xc5
k8s.io/client-go/kubernetes/typed/core/v1.(*pods).List(0xc0009f8060, {0x6cf9080, 0xc0005685b0}, {{{0x0, 0x0}, {0x0, 0x0}}, {0x56c662a, 0x1c}, {0x0, ...}, ...})
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/kubernetes/typed/core/v1/pod.go:99 +0x165
k8s.io/minikube/test/integration.PodWait.func1({0x6cf9080, 0xc0005685b0})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:327 +0x10b
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext.func2(0xc00006f9d0?, {0x6cf9080?, 0xc0005685b0?})
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/loop.go:87 +0x52
k8s.io/apimachinery/pkg/util/wait.loopConditionUntilContext({0x6cf9080, 0xc0005685b0}, {0x6cec7b0, 0xc000161b00}, 0x1, 0x0, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/loop.go:88 +0x24d
k8s.io/apimachinery/pkg/util/wait.PollUntilContextTimeout({0x6cf9080?, 0xc00048ef50?}, 0x3b9aca00, 0xc00006fe10?, 0x1, 0xc00006fc18)
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:48 +0xa5
k8s.io/minikube/test/integration.PodWait({0x6cf9080, 0xc00048ef50}, 0xc002280000, {0xc0021fe120, 0x1c}, {0x56af005, 0x14}, {0x56c662a, 0x1c}, 0x7dba821800)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/helpers_test.go:371 +0x385
k8s.io/minikube/test/integration.validateAppExistsAfterStop({0x6cf9080, 0xc00048ef50}, 0xc002280000, {0xc0021fe120, 0x1c}, {0x56b1e48?, 0xc000095f60?}, {0x3c1ffd3?, 0x3b77f2f?}, {0xc002e01100, ...})
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:274 +0x145
k8s.io/minikube/test/integration.TestStartStop.func1.1.1.1(0xc002280000)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:156 +0x66
testing.tRunner(0xc002280000, 0xc0028c2100)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 3907
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390
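
The blocked pods.List call at the top of this stack sits inside wait.PollUntilContextTimeout: the test's PodWait helper keeps listing pods for a label selector until they report Running or the timeout expires. A rough sketch of that polling shape, with assumed names rather than the actual helpers_test.go code:

	package example

	import (
		"context"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/apimachinery/pkg/util/wait"
		"k8s.io/client-go/kubernetes"
	)

	// waitForRunningPods polls once per second until every pod matching the
	// selector is Running, mirroring the PodWait frame in the dump above.
	func waitForRunningPods(ctx context.Context, c kubernetes.Interface, ns, selector string, timeout time.Duration) error {
		return wait.PollUntilContextTimeout(ctx, time.Second, timeout, true, func(ctx context.Context) (bool, error) {
			pods, err := c.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
			if err != nil {
				return false, nil // transient API error: keep polling until the timeout
			}
			if len(pods.Items) == 0 {
				return false, nil
			}
			for _, p := range pods.Items {
				if p.Status.Phase != corev1.PodRunning {
					return false, nil
				}
			}
			return true, nil
		})
	}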

                                                
                                                
goroutine 3907 [chan receive, 9 minutes]:
testing.(*T).Run(0xc002281040, {0x56b4cb4?, 0x60400000004?}, 0xc0028c2100)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1750 +0x3ab
k8s.io/minikube/test/integration.TestStartStop.func1.1.1(0xc002281040)
	/mnt/disks/sdb/jenkins/go/src/k8s.io/minikube/test/integration/start_stop_delete_test.go:155 +0x2af
testing.tRunner(0xc002281040, 0xc0028c3200)
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1689 +0xfb
created by testing.(*T).Run in goroutine 2533
	/var/lib/jenkins/go/pkg/mod/golang.org/toolchain@v0.0.1-go1.22.2.linux-amd64/src/testing/testing.go:1742 +0x390

                                                
                                                
goroutine 3972 [select, 1 minutes]:
k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1.1()
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:297 +0x1b8
created by k8s.io/apimachinery/pkg/util/wait.PollImmediateUntilWithContext.poller.func1 in goroutine 3971
	/var/lib/jenkins/go/pkg/mod/k8s.io/apimachinery@v0.30.0/pkg/util/wait/poll.go:280 +0xbb

                                                
                                                
goroutine 3985 [chan receive, 17 minutes]:
k8s.io/client-go/transport.(*dynamicClientCert).Run(0xc002160d00, 0xc00091a3c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cert_rotation.go:147 +0x2a9
created by k8s.io/client-go/transport.(*tlsTransportCache).get in goroutine 3966
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/transport/cache.go:122 +0x585

                                                
                                                
goroutine 3952 [select]:
k8s.io/client-go/util/workqueue.(*delayingType).waitingLoop(0xc0020109c0)
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:276 +0x2ff
created by k8s.io/client-go/util/workqueue.newDelayingQueue in goroutine 3966
	/var/lib/jenkins/go/pkg/mod/k8s.io/client-go@v0.30.0/util/workqueue/delaying_queue.go:113 +0x205

                                                
                                    

Test pass (195/223)

Order  Passed test  Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 25.82
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.3
9 TestDownloadOnly/v1.20.0/DeleteAll 0.39
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.37
12 TestDownloadOnly/v1.30.0/json-events 10.5
13 TestDownloadOnly/v1.30.0/preload-exists 0
16 TestDownloadOnly/v1.30.0/kubectl 0
17 TestDownloadOnly/v1.30.0/LogsDuration 0.31
18 TestDownloadOnly/v1.30.0/DeleteAll 0.39
19 TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds 0.37
21 TestBinaryMirror 0.99
22 TestOffline 174.99
25 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.2
26 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.22
27 TestAddons/Setup 147.23
29 TestAddons/parallel/Registry 14.68
30 TestAddons/parallel/Ingress 19.39
31 TestAddons/parallel/InspektorGadget 10.57
32 TestAddons/parallel/MetricsServer 5.63
33 TestAddons/parallel/HelmTiller 9.95
35 TestAddons/parallel/CSI 79.45
36 TestAddons/parallel/Headlamp 15
37 TestAddons/parallel/CloudSpanner 5.4
38 TestAddons/parallel/LocalPath 9.83
39 TestAddons/parallel/NvidiaDevicePlugin 5.34
40 TestAddons/parallel/Yakd 5.01
43 TestAddons/serial/GCPAuth/Namespaces 0.1
44 TestAddons/StoppedEnableDisable 5.95
46 TestCertExpiration 246.15
47 TestDockerFlags 38.96
48 TestForceSystemdFlag 38.84
49 TestForceSystemdEnv 40.06
52 TestHyperKitDriverInstallOrUpdate 7.8
55 TestErrorSpam/setup 36.8
56 TestErrorSpam/start 1.63
57 TestErrorSpam/status 0.52
58 TestErrorSpam/pause 1.36
59 TestErrorSpam/unpause 1.36
60 TestErrorSpam/stop 155.82
63 TestFunctional/serial/CopySyncFile 0
64 TestFunctional/serial/StartWithProxy 173
65 TestFunctional/serial/AuditLog 0
66 TestFunctional/serial/SoftStart 41.33
67 TestFunctional/serial/KubeContext 0.04
68 TestFunctional/serial/KubectlGetPods 0.06
71 TestFunctional/serial/CacheCmd/cache/add_remote 3.04
72 TestFunctional/serial/CacheCmd/cache/add_local 1.44
73 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.09
74 TestFunctional/serial/CacheCmd/cache/list 0.09
75 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.18
76 TestFunctional/serial/CacheCmd/cache/cache_reload 1.08
77 TestFunctional/serial/CacheCmd/cache/delete 0.18
78 TestFunctional/serial/MinikubeKubectlCmd 0.96
79 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.37
80 TestFunctional/serial/ExtraConfig 42.18
81 TestFunctional/serial/ComponentHealth 0.05
82 TestFunctional/serial/LogsCmd 2.79
83 TestFunctional/serial/LogsFileCmd 2.69
84 TestFunctional/serial/InvalidService 4.24
86 TestFunctional/parallel/ConfigCmd 0.55
87 TestFunctional/parallel/DashboardCmd 13.71
88 TestFunctional/parallel/DryRun 0.94
89 TestFunctional/parallel/InternationalLanguage 0.52
90 TestFunctional/parallel/StatusCmd 0.53
94 TestFunctional/parallel/ServiceCmdConnect 6.59
95 TestFunctional/parallel/AddonsCmd 0.27
96 TestFunctional/parallel/PersistentVolumeClaim 28.66
98 TestFunctional/parallel/SSHCmd 0.32
99 TestFunctional/parallel/CpCmd 1.2
100 TestFunctional/parallel/MySQL 26.31
101 TestFunctional/parallel/FileSync 0.25
102 TestFunctional/parallel/CertSync 1.39
106 TestFunctional/parallel/NodeLabels 0.05
108 TestFunctional/parallel/NonActiveRuntimeDisabled 0.16
110 TestFunctional/parallel/License 0.65
111 TestFunctional/parallel/Version/short 0.16
112 TestFunctional/parallel/Version/components 0.4
113 TestFunctional/parallel/ImageCommands/ImageListShort 0.17
114 TestFunctional/parallel/ImageCommands/ImageListTable 0.2
115 TestFunctional/parallel/ImageCommands/ImageListJson 0.16
116 TestFunctional/parallel/ImageCommands/ImageListYaml 0.17
117 TestFunctional/parallel/ImageCommands/ImageBuild 2.1
118 TestFunctional/parallel/ImageCommands/Setup 2.4
119 TestFunctional/parallel/DockerEnv/bash 0.8
120 TestFunctional/parallel/UpdateContextCmd/no_changes 0.2
121 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
122 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.2
123 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 3.34
124 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 2.16
125 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 5.18
126 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.17
127 TestFunctional/parallel/ImageCommands/ImageRemove 0.44
128 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.5
129 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.23
130 TestFunctional/parallel/ServiceCmd/DeployApp 13.15
132 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.37
133 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
135 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 11.14
136 TestFunctional/parallel/ServiceCmd/List 0.39
137 TestFunctional/parallel/ServiceCmd/JSONOutput 0.38
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.26
139 TestFunctional/parallel/ServiceCmd/Format 0.26
140 TestFunctional/parallel/ServiceCmd/URL 0.28
141 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
142 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
143 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.02
144 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.02
145 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
146 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.13
147 TestFunctional/parallel/ProfileCmd/profile_not_create 0.33
148 TestFunctional/parallel/ProfileCmd/profile_list 0.3
149 TestFunctional/parallel/ProfileCmd/profile_json_output 0.31
150 TestFunctional/parallel/MountCmd/any-port 6.03
151 TestFunctional/parallel/MountCmd/specific-port 1.35
152 TestFunctional/parallel/MountCmd/VerifyCleanup 1.58
153 TestFunctional/delete_addon-resizer_images 0.12
154 TestFunctional/delete_my-image_image 0.05
155 TestFunctional/delete_minikube_cached_images 0.05
163 TestMultiControlPlane/serial/NodeLabels 0.05
170 TestImageBuild/serial/Setup 39.53
171 TestImageBuild/serial/NormalBuild 1.31
172 TestImageBuild/serial/BuildWithBuildArg 0.51
173 TestImageBuild/serial/BuildWithDockerIgnore 0.24
174 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.22
178 TestJSONOutput/start/Command 93.1
179 TestJSONOutput/start/Audit 0
181 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
182 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
184 TestJSONOutput/pause/Command 0.47
185 TestJSONOutput/pause/Audit 0
187 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
188 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
190 TestJSONOutput/unpause/Command 0.47
191 TestJSONOutput/unpause/Audit 0
193 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
194 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
196 TestJSONOutput/stop/Command 8.36
197 TestJSONOutput/stop/Audit 0
199 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
200 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
201 TestErrorJSONOutput 0.77
206 TestMainNoArgs 0.09
207 TestMinikubeProfile 91.24
210 TestMountStart/serial/StartWithMountFirst 21.05
211 TestMountStart/serial/VerifyMountFirst 0.31
212 TestMountStart/serial/StartWithMountSecond 21.23
213 TestMountStart/serial/VerifyMountSecond 0.31
214 TestMountStart/serial/DeleteFirst 2.39
215 TestMountStart/serial/VerifyMountPostDelete 0.31
216 TestMountStart/serial/Stop 2.39
217 TestMountStart/serial/RestartStopped 18.34
218 TestMountStart/serial/VerifyMountPostStop 0.32
221 TestMultiNode/serial/FreshStart2Nodes 95.45
222 TestMultiNode/serial/DeployApp2Nodes 4.23
223 TestMultiNode/serial/PingHostFrom2Pods 0.92
224 TestMultiNode/serial/AddNode 35.26
225 TestMultiNode/serial/MultiNodeLabels 0.05
226 TestMultiNode/serial/ProfileList 0.21
227 TestMultiNode/serial/CopyFile 5.46
228 TestMultiNode/serial/StopNode 2.85
229 TestMultiNode/serial/StartAfterStop 26.71
230 TestMultiNode/serial/RestartKeepsNodes 146.49
231 TestMultiNode/serial/DeleteNode 3.39
232 TestMultiNode/serial/StopMultiNode 16.82
233 TestMultiNode/serial/RestartMultiNode 214.01
234 TestMultiNode/serial/ValidateNameConflict 47.92
238 TestPreload 170.48
240 TestScheduledStopUnix 226.4
241 TestSkaffold 114.96
244 TestRunningBinaryUpgrade 110.54
246 TestKubernetesUpgrade 121.46
259 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 3.01
260 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 6.28
261 TestStoppedBinaryUpgrade/Setup 1.4
262 TestStoppedBinaryUpgrade/Upgrade 87.75
263 TestStoppedBinaryUpgrade/MinikubeLogs 3.08
265 TestPause/serial/Start 171.95
274 TestNoKubernetes/serial/StartNoK8sWithVersion 0.51
275 TestNoKubernetes/serial/StartWithK8s 51.4
276 TestNoKubernetes/serial/StartWithStopK8s 8.56
277 TestNoKubernetes/serial/Start 20.22
278 TestNoKubernetes/serial/VerifyK8sNotRunning 0.14
279 TestNoKubernetes/serial/ProfileList 0.52
280 TestNoKubernetes/serial/Stop 2.37
281 TestNoKubernetes/serial/StartNoArgs 19.36
282 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.14
284 TestPause/serial/SecondStartNoReconfiguration 38.7
287 TestPause/serial/Pause 0.56
288 TestPause/serial/VerifyStatus 0.17
289 TestPause/serial/Unpause 0.54
290 TestPause/serial/PauseAgain 0.62
294 TestPause/serial/DeletePaused 5.29
295 TestPause/serial/VerifyDeletedResources 0.25
TestDownloadOnly/v1.20.0/json-events (25.82s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-793000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-793000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (25.824173863s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (25.82s)

                                                
                                    
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/LogsDuration (0.3s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-793000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-793000: exit status 85 (299.983502ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-793000 | jenkins | v1.33.0 | 29 Apr 24 12:10 PDT |          |
	|         | -p download-only-793000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 12:10:47
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 12:10:47.661966   16303 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:10:47.662165   16303 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:10:47.662170   16303 out.go:304] Setting ErrFile to fd 2...
	I0429 12:10:47.662174   16303 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:10:47.662347   16303 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18774-15855/.minikube/bin
	W0429 12:10:47.662446   16303 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/18774-15855/.minikube/config/config.json: open /Users/jenkins/minikube-integration/18774-15855/.minikube/config/config.json: no such file or directory
	I0429 12:10:47.664136   16303 out.go:298] Setting JSON to true
	I0429 12:10:47.686489   16303 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":9617,"bootTime":1714408230,"procs":430,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0429 12:10:47.686574   16303 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 12:10:47.708385   16303 out.go:97] [download-only-793000] minikube v1.33.0 on Darwin 14.4.1
	I0429 12:10:47.730111   16303 out.go:169] MINIKUBE_LOCATION=18774
	W0429 12:10:47.708623   16303 preload.go:294] Failed to list preload files: open /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball: no such file or directory
	I0429 12:10:47.708640   16303 notify.go:220] Checking for updates...
	I0429 12:10:47.775122   16303 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18774-15855/kubeconfig
	I0429 12:10:47.796026   16303 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 12:10:47.817190   16303 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:10:47.838420   16303 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18774-15855/.minikube
	W0429 12:10:47.881027   16303 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0429 12:10:47.881569   16303 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:10:47.912221   16303 out.go:97] Using the hyperkit driver based on user configuration
	I0429 12:10:47.912275   16303 start.go:297] selected driver: hyperkit
	I0429 12:10:47.912296   16303 start.go:901] validating driver "hyperkit" against <nil>
	I0429 12:10:47.912486   16303 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:10:47.912714   16303 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18774-15855/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0429 12:10:48.148163   16303 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
	I0429 12:10:48.153281   16303 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:10:48.153308   16303 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0429 12:10:48.153339   16303 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 12:10:48.156689   16303 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0429 12:10:48.156838   16303 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 12:10:48.156896   16303 cni.go:84] Creating CNI manager for ""
	I0429 12:10:48.156930   16303 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0429 12:10:48.157010   16303 start.go:340] cluster config:
	{Name:download-only-793000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-793000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:10:48.157275   16303 iso.go:125] acquiring lock: {Name:mk81eeca500d1c63233c315450a8b707db3bb810 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:10:48.178606   16303 out.go:97] Downloading VM boot image ...
	I0429 12:10:48.178741   16303 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/iso/amd64/minikube-v1.33.0-1713736271-18706-amd64.iso
	I0429 12:10:57.269538   16303 out.go:97] Starting "download-only-793000" primary control-plane node in "download-only-793000" cluster
	I0429 12:10:57.269575   16303 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0429 12:10:57.327184   16303 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0429 12:10:57.327233   16303 cache.go:56] Caching tarball of preloaded images
	I0429 12:10:57.327679   16303 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0429 12:10:57.348266   16303 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0429 12:10:57.348341   16303 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 12:10:57.442959   16303 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0429 12:11:06.661899   16303 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 12:11:06.662094   16303 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 12:11:07.210558   16303 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on docker
	I0429 12:11:07.210780   16303 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/download-only-793000/config.json ...
	I0429 12:11:07.210802   16303 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/download-only-793000/config.json: {Name:mkd7329d518edb61cb89f0be65a722bcc4d1188a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:11:07.211143   16303 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0429 12:11:07.211528   16303 download.go:107] Downloading: https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.20.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/darwin/amd64/v1.20.0/kubectl
	
	
	* The control-plane node download-only-793000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-793000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.30s)
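
The non-zero exit above does not fail the test: in download-only mode no VM is ever created, so "minikube logs" has no host to query and the harness only records the status. The report's "(dbg) Run:" / "Non-zero exit" lines come from a helper that shells out to the binary under test and captures the exit code; a minimal sketch of that capture (names are illustrative, not the real helper):

	package example

	import (
		"errors"
		"os/exec"
	)

	// runBinary runs the binary under test and returns its combined output and
	// exit code, mirroring the "(dbg) Run:" lines in this report.
	func runBinary(bin string, args ...string) (string, int, error) {
		out, err := exec.Command(bin, args...).CombinedOutput()
		code := 0
		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) {
			code = exitErr.ExitCode() // e.g. 85 for "logs" against a download-only profile
		}
		return string(out), code, err
	}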

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAll (0.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.39s)

                                                
                                    
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-793000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.37s)

                                                
                                    
TestDownloadOnly/v1.30.0/json-events (10.5s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-373000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-373000 --force --alsologtostderr --kubernetes-version=v1.30.0 --container-runtime=docker --driver=hyperkit : (10.502860621s)
--- PASS: TestDownloadOnly/v1.30.0/json-events (10.50s)

                                                
                                    
TestDownloadOnly/v1.30.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/preload-exists
--- PASS: TestDownloadOnly/v1.30.0/preload-exists (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/kubectl
--- PASS: TestDownloadOnly/v1.30.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/LogsDuration (0.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-373000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-373000: exit status 85 (309.132816ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-793000 | jenkins | v1.33.0 | 29 Apr 24 12:10 PDT |                     |
	|         | -p download-only-793000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.0 | 29 Apr 24 12:11 PDT | 29 Apr 24 12:11 PDT |
	| delete  | -p download-only-793000        | download-only-793000 | jenkins | v1.33.0 | 29 Apr 24 12:11 PDT | 29 Apr 24 12:11 PDT |
	| start   | -o=json --download-only        | download-only-373000 | jenkins | v1.33.0 | 29 Apr 24 12:11 PDT |                     |
	|         | -p download-only-373000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/04/29 12:11:14
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.1 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0429 12:11:14.546891   16350 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:11:14.547165   16350 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:11:14.547170   16350 out.go:304] Setting ErrFile to fd 2...
	I0429 12:11:14.547173   16350 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:11:14.547345   16350 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18774-15855/.minikube/bin
	I0429 12:11:14.548753   16350 out.go:298] Setting JSON to true
	I0429 12:11:14.571318   16350 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":9644,"bootTime":1714408230,"procs":435,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0429 12:11:14.571428   16350 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 12:11:14.594069   16350 out.go:97] [download-only-373000] minikube v1.33.0 on Darwin 14.4.1
	I0429 12:11:14.615648   16350 out.go:169] MINIKUBE_LOCATION=18774
	I0429 12:11:14.594260   16350 notify.go:220] Checking for updates...
	I0429 12:11:14.657532   16350 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/18774-15855/kubeconfig
	I0429 12:11:14.678782   16350 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 12:11:14.699859   16350 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:11:14.720939   16350 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/18774-15855/.minikube
	W0429 12:11:14.762561   16350 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0429 12:11:14.763105   16350 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:11:14.793724   16350 out.go:97] Using the hyperkit driver based on user configuration
	I0429 12:11:14.793839   16350 start.go:297] selected driver: hyperkit
	I0429 12:11:14.793860   16350 start.go:901] validating driver "hyperkit" against <nil>
	I0429 12:11:14.794057   16350 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:11:14.794330   16350 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/18774-15855/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0429 12:11:14.804360   16350 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.0
	I0429 12:11:14.808112   16350 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:11:14.808133   16350 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0429 12:11:14.808161   16350 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0429 12:11:14.810784   16350 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0429 12:11:14.810929   16350 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0429 12:11:14.810978   16350 cni.go:84] Creating CNI manager for ""
	I0429 12:11:14.810993   16350 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0429 12:11:14.811004   16350 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0429 12:11:14.811069   16350 start.go:340] cluster config:
	{Name:download-only-373000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.0 ClusterName:download-only-373000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:11:14.811160   16350 iso.go:125] acquiring lock: {Name:mk81eeca500d1c63233c315450a8b707db3bb810 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0429 12:11:14.831804   16350 out.go:97] Starting "download-only-373000" primary control-plane node in "download-only-373000" cluster
	I0429 12:11:14.831839   16350 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 12:11:14.888221   16350 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 12:11:14.888268   16350 cache.go:56] Caching tarball of preloaded images
	I0429 12:11:14.888749   16350 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 12:11:14.910555   16350 out.go:97] Downloading Kubernetes v1.30.0 preload ...
	I0429 12:11:14.910607   16350 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 12:11:14.999125   16350 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.0/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4?checksum=md5:00b6acf85a82438f3897c0a6fafdcee7 -> /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4
	I0429 12:11:20.345875   16350 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 12:11:20.346097   16350 preload.go:255] verifying checksum of /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.0-docker-overlay2-amd64.tar.lz4 ...
	I0429 12:11:20.828894   16350 cache.go:59] Finished verifying existence of preloaded tar for v1.30.0 on docker
	I0429 12:11:20.829256   16350 profile.go:143] Saving config to /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/download-only-373000/config.json ...
	I0429 12:11:20.829278   16350 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/download-only-373000/config.json: {Name:mk422a16ab671f7573568ca1284d616489d0c7ec Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0429 12:11:20.829658   16350 preload.go:132] Checking if preload exists for k8s version v1.30.0 and runtime docker
	I0429 12:11:20.830934   16350 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/18774-15855/.minikube/cache/darwin/amd64/v1.30.0/kubectl
	
	
	* The control-plane node download-only-373000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-373000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.0/LogsDuration (0.31s)

                                                
                                    
TestDownloadOnly/v1.30.0/DeleteAll (0.39s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.0/DeleteAll (0.39s)

                                                
                                    
TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.37s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-373000
--- PASS: TestDownloadOnly/v1.30.0/DeleteAlwaysSucceeds (0.37s)

                                                
                                    
TestBinaryMirror (0.99s)

                                                
                                                
=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-519000 --alsologtostderr --binary-mirror http://127.0.0.1:60615 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-519000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-519000
--- PASS: TestBinaryMirror (0.99s)

                                                
                                    
TestOffline (174.99s)

                                                
                                                
=== RUN   TestOffline
=== PAUSE TestOffline

                                                
                                                

                                                
                                                
=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-818000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-818000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : (2m49.693249093s)
helpers_test.go:175: Cleaning up "offline-docker-818000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-818000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-818000: (5.29432132s)
--- PASS: TestOffline (174.99s)

                                                
                                    
TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.2s)

                                                
                                                
=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-531000
addons_test.go:928: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-531000: exit status 85 (197.550543ms)

                                                
                                                
-- stdout --
	* Profile "addons-531000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-531000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.20s)

                                                
                                    
TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

                                                
                                                
=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

                                                
                                                

                                                
                                                
=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-531000
addons_test.go:939: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-531000: exit status 85 (217.883215ms)

                                                
                                                
-- stdout --
	* Profile "addons-531000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-531000"

                                                
                                                
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.22s)

                                                
                                    
TestAddons/Setup (147.23s)

                                                
                                                
=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-531000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-darwin-amd64 start -p addons-531000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m27.23019935s)
--- PASS: TestAddons/Setup (147.23s)

                                                
                                    
TestAddons/parallel/Registry (14.68s)

                                                
                                                
=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 11.154259ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-ds526" [bf0976db-70ad-41f5-bf00-428228b42c10] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.005194371s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-cqhsl" [109f5a5b-53ce-4a0e-8ed5-7e5a3b43b5f8] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004117998s
addons_test.go:340: (dbg) Run:  kubectl --context addons-531000 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-531000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-531000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.987234913s)
addons_test.go:359: (dbg) Run:  out/minikube-darwin-amd64 -p addons-531000 ip
addons_test.go:388: (dbg) Run:  out/minikube-darwin-amd64 -p addons-531000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.68s)

                                                
                                    
TestAddons/parallel/Ingress (19.39s)

                                                
                                                
=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-531000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-531000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-531000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [3bc9053c-d5ad-4f81-b06d-ebf3a196ec08] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [3bc9053c-d5ad-4f81-b06d-ebf3a196ec08] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.006303592s
addons_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 -p addons-531000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:286: (dbg) Run:  kubectl --context addons-531000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-darwin-amd64 -p addons-531000 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.169.0.50
addons_test.go:306: (dbg) Run:  out/minikube-darwin-amd64 -p addons-531000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 -p addons-531000 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-darwin-amd64 -p addons-531000 addons disable ingress --alsologtostderr -v=1: (7.486239077s)
--- PASS: TestAddons/parallel/Ingress (19.39s)

                                                
                                    
TestAddons/parallel/InspektorGadget (10.57s)

                                                
                                                
=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-wpbjs" [abfc7153-a521-4436-bcd1-5670c4f43d15] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.002420495s
addons_test.go:841: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-531000
addons_test.go:841: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-531000: (5.567759905s)
--- PASS: TestAddons/parallel/InspektorGadget (10.57s)

                                                
                                    
TestAddons/parallel/MetricsServer (5.63s)

                                                
                                                
=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 1.70513ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-44hkg" [c1a58405-583f-4d1f-b33b-3dd4e9906f25] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.004256501s
addons_test.go:415: (dbg) Run:  kubectl --context addons-531000 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-darwin-amd64 -p addons-531000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.63s)

                                                
                                    
TestAddons/parallel/HelmTiller (9.95s)

                                                
                                                
=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 1.632897ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-zswct" [97fc514f-7e16-4efd-82d6-3ca380220068] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005725876s
addons_test.go:473: (dbg) Run:  kubectl --context addons-531000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-531000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.555915833s)
addons_test.go:490: (dbg) Run:  out/minikube-darwin-amd64 -p addons-531000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.95s)

                                                
                                    
TestAddons/parallel/CSI (79.45s)

                                                
                                                
=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 12.710027ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-531000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
2024/04/29 12:14:09 [DEBUG] GET http://192.169.0.50:5000
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-531000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [353fb33e-e748-48ee-aba6-f866436f6448] Pending
helpers_test.go:344: "task-pv-pod" [353fb33e-e748-48ee-aba6-f866436f6448] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [353fb33e-e748-48ee-aba6-f866436f6448] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 14.002672033s
addons_test.go:584: (dbg) Run:  kubectl --context addons-531000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-531000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-531000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-531000 delete pod task-pv-pod
addons_test.go:594: (dbg) Done: kubectl --context addons-531000 delete pod task-pv-pod: (1.17345433s)
addons_test.go:600: (dbg) Run:  kubectl --context addons-531000 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-531000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-531000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [9c722470-ceea-4383-a28e-3893df670b11] Pending
helpers_test.go:344: "task-pv-pod-restore" [9c722470-ceea-4383-a28e-3893df670b11] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [9c722470-ceea-4383-a28e-3893df670b11] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 9.004818391s
addons_test.go:626: (dbg) Run:  kubectl --context addons-531000 delete pod task-pv-pod-restore
addons_test.go:626: (dbg) Done: kubectl --context addons-531000 delete pod task-pv-pod-restore: (1.236357099s)
addons_test.go:630: (dbg) Run:  kubectl --context addons-531000 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-531000 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-darwin-amd64 -p addons-531000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-darwin-amd64 -p addons-531000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.436679839s)
addons_test.go:642: (dbg) Run:  out/minikube-darwin-amd64 -p addons-531000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (79.45s)
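
The long run of helpers_test.go:394 lines above is the test harness polling the claim's phase once per tick. A minimal shell sketch that mirrors that polling by hand, reusing the context and claim names from this run (addons-531000, hpvc):

    # print the PVC phase the same way the helper does, one jsonpath query per tick
    while true; do
      kubectl --context addons-531000 get pvc hpvc -n default -o jsonpath='{.status.phase}'; echo
      sleep 2
    done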

                                                
                                    
TestAddons/parallel/Headlamp (15s)

                                                
                                                
=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-531000 --alsologtostderr -v=1
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7559bf459f-v8r8w" [7687507b-2f67-46e8-b7ec-1e6d29ebff1d] Pending
helpers_test.go:344: "headlamp-7559bf459f-v8r8w" [7687507b-2f67-46e8-b7ec-1e6d29ebff1d] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7559bf459f-v8r8w" [7687507b-2f67-46e8-b7ec-1e6d29ebff1d] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004419471s
--- PASS: TestAddons/parallel/Headlamp (15.00s)

                                                
                                    
TestAddons/parallel/CloudSpanner (5.4s)

                                                
                                                
=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6dc8d859f6-tgwwx" [996a60ac-8705-4ce9-96a1-1e9dc5918485] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002247269s
addons_test.go:860: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-531000
--- PASS: TestAddons/parallel/CloudSpanner (5.40s)

                                                
                                    
TestAddons/parallel/LocalPath (9.83s)

                                                
                                                
=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-531000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-531000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-531000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [89088e50-cc7d-41e2-9eca-9a91adc1a21c] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [89088e50-cc7d-41e2-9eca-9a91adc1a21c] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [89088e50-cc7d-41e2-9eca-9a91adc1a21c] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 4.005418134s
addons_test.go:891: (dbg) Run:  kubectl --context addons-531000 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-darwin-amd64 -p addons-531000 ssh "cat /opt/local-path-provisioner/pvc-59f3125e-09a0-470c-b619-ac676067f006_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-531000 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-531000 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-darwin-amd64 -p addons-531000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
--- PASS: TestAddons/parallel/LocalPath (9.83s)

                                                
                                    
TestAddons/parallel/NvidiaDevicePlugin (5.34s)

                                                
                                                
=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-ljx6g" [e53f3097-81aa-4457-a059-f4aae7ff9116] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.005342617s
addons_test.go:955: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-531000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.34s)

                                                
                                    
TestAddons/parallel/Yakd (5.01s)

                                                
                                                
=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-5ddbf7d777-z92l2" [e497c092-4d77-48fa-a910-67f92c990be0] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.008498283s
--- PASS: TestAddons/parallel/Yakd (5.01s)

                                                
                                    
TestAddons/serial/GCPAuth/Namespaces (0.1s)

                                                
                                                
=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-531000 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-531000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.10s)
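
The check above can be replayed by hand: create a fresh namespace and confirm the gcp-auth addon has replicated its secret into it (context name taken from this run, namespace name arbitrary):

    kubectl --context addons-531000 create ns new-namespace
    kubectl --context addons-531000 get secret gcp-auth -n new-namespace   # present only once the addon has copied it in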

                                                
                                    
TestAddons/StoppedEnableDisable (5.95s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-531000
addons_test.go:172: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-531000: (5.389678463s)
addons_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-531000
addons_test.go:180: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-531000
addons_test.go:185: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-531000
--- PASS: TestAddons/StoppedEnableDisable (5.95s)

                                                
                                    
TestCertExpiration (246.15s)

                                                
                                                
=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

                                                
                                                

                                                
                                                
=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-660000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-660000 --memory=2048 --cert-expiration=3m --driver=hyperkit : (36.002610881s)
E0429 13:25:18.452963   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-660000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
cert_options_test.go:131: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-660000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : (24.837760314s)
helpers_test.go:175: Cleaning up "cert-expiration-660000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-660000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-660000: (5.303111459s)
--- PASS: TestCertExpiration (246.15s)

                                                
                                    
TestDockerFlags (38.96s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-916000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
E0429 13:23:51.789256   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 13:23:55.389423   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-916000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : (35.139366648s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-916000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-916000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-916000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-916000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-916000: (3.483805891s)
--- PASS: TestDockerFlags (38.96s)
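
The two ssh commands above are the actual assertions: the --docker-env values should appear in the docker unit's Environment property and the --docker-opt values in its ExecStart line. While the profile still exists, the check amounts to:

    out/minikube-darwin-amd64 -p docker-flags-916000 ssh "sudo systemctl show docker --property=Environment --no-pager"   # expect FOO=BAR and BAZ=BAT
    out/minikube-darwin-amd64 -p docker-flags-916000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"     # expect the debug and icc=true options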

                                                
                                    
TestForceSystemdFlag (38.84s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-614000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-614000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : (35.254383607s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-614000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-614000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-614000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-614000: (3.420857986s)
--- PASS: TestForceSystemdFlag (38.84s)

                                                
                                    
TestForceSystemdEnv (40.06s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-895000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-895000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : (36.459799762s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-895000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-895000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-895000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-895000: (3.427352473s)
--- PASS: TestForceSystemdEnv (40.06s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (7.8s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (7.80s)

                                                
                                    
TestErrorSpam/setup (36.8s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-658000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-658000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-658000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-658000 --driver=hyperkit : (36.800654808s)
--- PASS: TestErrorSpam/setup (36.80s)

                                                
                                    
TestErrorSpam/start (1.63s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-658000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-658000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-658000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-658000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-658000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-658000 start --dry-run
--- PASS: TestErrorSpam/start (1.63s)

                                                
                                    
TestErrorSpam/status (0.52s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-658000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-658000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-658000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-658000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-658000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-658000 status
--- PASS: TestErrorSpam/status (0.52s)

                                                
                                    
TestErrorSpam/pause (1.36s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-658000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-658000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-658000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-658000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-658000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-658000 pause
--- PASS: TestErrorSpam/pause (1.36s)

                                                
                                    
TestErrorSpam/unpause (1.36s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-658000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-658000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-658000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-658000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-658000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-658000 unpause
--- PASS: TestErrorSpam/unpause (1.36s)

                                                
                                    
TestErrorSpam/stop (155.82s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-658000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-658000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-658000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-658000 stop: (5.388528026s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-658000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-658000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-658000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-658000 stop: (1m15.215758739s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-658000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-658000 stop
E0429 12:18:55.229120   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
E0429 12:18:55.237448   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
E0429 12:18:55.248083   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
E0429 12:18:55.269866   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
E0429 12:18:55.310335   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
E0429 12:18:55.392545   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
E0429 12:18:55.553659   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
E0429 12:18:55.875933   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
E0429 12:18:56.517719   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
E0429 12:18:57.799937   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
E0429 12:19:00.361403   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
E0429 12:19:05.483617   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-658000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-658000 stop: (1m15.212950005s)
--- PASS: TestErrorSpam/stop (155.82s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/18774-15855/.minikube/files/etc/test/nested/copy/16301/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (173s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-692000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
E0429 12:19:15.725109   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
E0429 12:19:36.206948   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
E0429 12:20:17.168019   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
E0429 12:21:39.087212   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-692000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (2m53.003776171s)
--- PASS: TestFunctional/serial/StartWithProxy (173.00s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (41.33s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-692000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-692000 --alsologtostderr -v=8: (41.32710334s)
functional_test.go:659: soft start took 41.32758944s for "functional-692000" cluster.
--- PASS: TestFunctional/serial/SoftStart (41.33s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-692000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.04s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-692000 cache add registry.k8s.io/pause:3.1: (1.074397712s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-692000 cache add registry.k8s.io/pause:3.3: (1.020881119s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.04s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.44s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-692000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialCacheCmdcacheadd_local2219806613/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 cache add minikube-local-cache-test:functional-692000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 cache delete minikube-local-cache-test:functional-692000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-692000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.44s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.09s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.09s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.18s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-692000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (158.32139ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.08s)
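
The sequence above removes the cached image inside the node, confirms crictl no longer finds it (the expected exit status 1), reloads the cache, and confirms the image is back. Replayed by hand it is the same four commands from the log:

    out/minikube-darwin-amd64 -p functional-692000 ssh sudo docker rmi registry.k8s.io/pause:latest
    out/minikube-darwin-amd64 -p functional-692000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image was just removed
    out/minikube-darwin-amd64 -p functional-692000 cache reload
    out/minikube-darwin-amd64 -p functional-692000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds after the reload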

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.18s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.18s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.96s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 kubectl -- --context functional-692000 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.96s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (1.37s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-692000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-692000 get pods: (1.371581796s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.37s)

                                                
                                    
TestFunctional/serial/ExtraConfig (42.18s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-692000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-692000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (42.179582223s)
functional_test.go:757: restart took 42.179695439s for "functional-692000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (42.18s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-692000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

                                                
                                    
TestFunctional/serial/LogsCmd (2.79s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-692000 logs: (2.785787801s)
--- PASS: TestFunctional/serial/LogsCmd (2.79s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (2.69s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd593991881/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-692000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd593991881/001/logs.txt: (2.692636257s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.69s)

                                                
                                    
TestFunctional/serial/InvalidService (4.24s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-692000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-692000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-692000: exit status 115 (280.816021ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|---------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |            URL            |
	|-----------|-------------|-------------|---------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.52:32424 |
	|-----------|-------------|-------------|---------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-692000 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.24s)
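
To reproduce the expected failure by hand, apply the same manifest and ask minikube for the service URL; with no running pod behind the service the command exits with status 115 and the SVC_UNREACHABLE message shown above (context and profile names from this run):

    kubectl --context functional-692000 apply -f testdata/invalidsvc.yaml
    out/minikube-darwin-amd64 service invalid-svc -p functional-692000    # expected: exit status 115, SVC_UNREACHABLE
    kubectl --context functional-692000 delete -f testdata/invalidsvc.yaml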

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-692000 config get cpus: exit status 14 (75.833318ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-692000 config get cpus: exit status 14 (63.37223ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.55s)
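
The exit status 14 above is what minikube config get returns when a key is not set; the set/get/unset cycle can be replayed directly:

    out/minikube-darwin-amd64 -p functional-692000 config unset cpus
    out/minikube-darwin-amd64 -p functional-692000 config get cpus    # exit status 14: key not in config
    out/minikube-darwin-amd64 -p functional-692000 config set cpus 2
    out/minikube-darwin-amd64 -p functional-692000 config get cpus    # now prints the stored value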

                                                
                                    
TestFunctional/parallel/DashboardCmd (13.71s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-692000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-692000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 17659: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (13.71s)

                                                
                                    
TestFunctional/parallel/DryRun (0.94s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-692000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-692000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (481.499901ms)

                                                
                                                
-- stdout --
	* [functional-692000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18774-15855/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18774-15855/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 12:24:47.250502   17620 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:24:47.250779   17620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:24:47.250785   17620 out.go:304] Setting ErrFile to fd 2...
	I0429 12:24:47.250789   17620 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:24:47.250972   17620 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18774-15855/.minikube/bin
	I0429 12:24:47.253345   17620 out.go:298] Setting JSON to false
	I0429 12:24:47.275708   17620 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":10457,"bootTime":1714408230,"procs":479,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0429 12:24:47.275806   17620 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 12:24:47.297491   17620 out.go:177] * [functional-692000] minikube v1.33.0 on Darwin 14.4.1
	I0429 12:24:47.339678   17620 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 12:24:47.339704   17620 notify.go:220] Checking for updates...
	I0429 12:24:47.361554   17620 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18774-15855/kubeconfig
	I0429 12:24:47.382422   17620 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 12:24:47.403598   17620 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:24:47.424289   17620 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18774-15855/.minikube
	I0429 12:24:47.465355   17620 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 12:24:47.487452   17620 config.go:182] Loaded profile config "functional-692000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:24:47.488177   17620 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:24:47.488251   17620 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:24:47.497809   17620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61844
	I0429 12:24:47.498182   17620 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:24:47.498620   17620 main.go:141] libmachine: Using API Version  1
	I0429 12:24:47.498630   17620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:24:47.498843   17620 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:24:47.498969   17620 main.go:141] libmachine: (functional-692000) Calling .DriverName
	I0429 12:24:47.499207   17620 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:24:47.499495   17620 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:24:47.499526   17620 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:24:47.507962   17620 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61846
	I0429 12:24:47.508300   17620 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:24:47.508629   17620 main.go:141] libmachine: Using API Version  1
	I0429 12:24:47.508638   17620 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:24:47.508901   17620 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:24:47.509030   17620 main.go:141] libmachine: (functional-692000) Calling .DriverName
	I0429 12:24:47.537553   17620 out.go:177] * Using the hyperkit driver based on existing profile
	I0429 12:24:47.558451   17620 start.go:297] selected driver: hyperkit
	I0429 12:24:47.558476   17620 start.go:901] validating driver "hyperkit" against &{Name:functional-692000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.0 ClusterName:functional-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.52 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:24:47.558665   17620 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 12:24:47.583565   17620 out.go:177] 
	W0429 12:24:47.604672   17620 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0429 12:24:47.625680   17620 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-692000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (0.94s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.52s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-692000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-692000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (522.068058ms)

                                                
                                                
-- stdout --
	* [functional-692000] minikube v1.33.0 sur Darwin 14.4.1
	  - MINIKUBE_LOCATION=18774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18774-15855/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18774-15855/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 12:24:48.184771   17638 out.go:291] Setting OutFile to fd 1 ...
	I0429 12:24:48.185017   17638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:24:48.185031   17638 out.go:304] Setting ErrFile to fd 2...
	I0429 12:24:48.185038   17638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 12:24:48.185388   17638 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18774-15855/.minikube/bin
	I0429 12:24:48.187055   17638 out.go:298] Setting JSON to false
	I0429 12:24:48.209461   17638 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":10458,"bootTime":1714408230,"procs":485,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.4.1","kernelVersion":"23.4.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0429 12:24:48.209559   17638 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0429 12:24:48.231383   17638 out.go:177] * [functional-692000] minikube v1.33.0 sur Darwin 14.4.1
	I0429 12:24:48.273455   17638 out.go:177]   - MINIKUBE_LOCATION=18774
	I0429 12:24:48.273598   17638 notify.go:220] Checking for updates...
	I0429 12:24:48.316261   17638 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/18774-15855/kubeconfig
	I0429 12:24:48.339233   17638 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0429 12:24:48.366747   17638 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0429 12:24:48.386959   17638 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18774-15855/.minikube
	I0429 12:24:48.407977   17638 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0429 12:24:48.450220   17638 config.go:182] Loaded profile config "functional-692000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 12:24:48.450555   17638 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:24:48.450597   17638 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:24:48.459478   17638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61854
	I0429 12:24:48.459841   17638 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:24:48.460248   17638 main.go:141] libmachine: Using API Version  1
	I0429 12:24:48.460260   17638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:24:48.460495   17638 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:24:48.460613   17638 main.go:141] libmachine: (functional-692000) Calling .DriverName
	I0429 12:24:48.460799   17638 driver.go:392] Setting default libvirt URI to qemu:///system
	I0429 12:24:48.461053   17638 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 12:24:48.461079   17638 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 12:24:48.469463   17638 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61856
	I0429 12:24:48.469834   17638 main.go:141] libmachine: () Calling .GetVersion
	I0429 12:24:48.470170   17638 main.go:141] libmachine: Using API Version  1
	I0429 12:24:48.470182   17638 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 12:24:48.470412   17638 main.go:141] libmachine: () Calling .GetMachineName
	I0429 12:24:48.470539   17638 main.go:141] libmachine: (functional-692000) Calling .DriverName
	I0429 12:24:48.498953   17638 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0429 12:24:48.540876   17638 start.go:297] selected driver: hyperkit
	I0429 12:24:48.540911   17638 start.go:901] validating driver "hyperkit" against &{Name:functional-692000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18706/minikube-v1.33.0-1713736271-18706-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.43-1713736339-18706@sha256:bccd96633fa59b612ea2e24c6961d2499fe576afbab2e6056a6801ffbd3b1a7e Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.0 ClusterName:functional-692000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.52 Port:8441 KubernetesVersion:v1.30.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:262
80h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0429 12:24:48.541085   17638 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0429 12:24:48.566135   17638 out.go:177] 
	W0429 12:24:48.588781   17638 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0429 12:24:48.609124   17638 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.52s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.53s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (6.59s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-692000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-692000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-dttxg" [449d80a1-d354-49d4-bf5f-9e2f80f552a1] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-dttxg" [449d80a1-d354-49d4-bf5f-9e2f80f552a1] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 6.006068272s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.169.0.52:31998
functional_test.go:1671: http://192.169.0.52:31998: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-dttxg

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.52:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.169.0.52:31998
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (6.59s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.27s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.27s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (28.66s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [73c4bcfa-3e69-4776-8f79-76d7b7d0376f] Running
E0429 12:24:22.926763   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.005129429s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-692000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-692000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-692000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-692000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [02f075dc-dbd4-4545-a800-b2327748fc6d] Pending
helpers_test.go:344: "sp-pod" [02f075dc-dbd4-4545-a800-b2327748fc6d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [02f075dc-dbd4-4545-a800-b2327748fc6d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.002677395s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-692000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-692000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-692000 delete -f testdata/storage-provisioner/pod.yaml: (1.057934896s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-692000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [53a3f690-932d-42f0-be76-bfe8dc87a9fa] Pending
helpers_test.go:344: "sp-pod" [53a3f690-932d-42f0-be76-bfe8dc87a9fa] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [53a3f690-932d-42f0-be76-bfe8dc87a9fa] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 8.002659021s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-692000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (28.66s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.32s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.32s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh -n functional-692000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 cp functional-692000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelCpCmd1474655489/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh -n functional-692000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh -n functional-692000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.20s)

                                                
                                    
TestFunctional/parallel/MySQL (26.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-692000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-bg2lq" [c3e3658e-3f01-4718-b74d-9864cd6f8cb5] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-bg2lq" [c3e3658e-3f01-4718-b74d-9864cd6f8cb5] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 23.003374858s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-692000 exec mysql-64454c8b5c-bg2lq -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-692000 exec mysql-64454c8b5c-bg2lq -- mysql -ppassword -e "show databases;": exit status 1 (144.908003ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-692000 exec mysql-64454c8b5c-bg2lq -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-692000 exec mysql-64454c8b5c-bg2lq -- mysql -ppassword -e "show databases;": exit status 1 (129.129044ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-692000 exec mysql-64454c8b5c-bg2lq -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (26.31s)

                                                
                                    
TestFunctional/parallel/FileSync (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/16301/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh "sudo cat /etc/test/nested/copy/16301/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

                                                
                                    
TestFunctional/parallel/CertSync (1.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/16301.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh "sudo cat /etc/ssl/certs/16301.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/16301.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh "sudo cat /usr/share/ca-certificates/16301.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/163012.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh "sudo cat /etc/ssl/certs/163012.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/163012.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh "sudo cat /usr/share/ca-certificates/163012.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.39s)

                                                
                                    
TestFunctional/parallel/NodeLabels (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-692000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-692000 ssh "sudo systemctl is-active crio": exit status 1 (158.102177ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.16s)

                                                
                                    
TestFunctional/parallel/License (0.65s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.65s)

                                                
                                    
TestFunctional/parallel/Version/short (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.16s)

                                                
                                    
TestFunctional/parallel/Version/components (0.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.40s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-692000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.0
registry.k8s.io/kube-proxy:v1.30.0
registry.k8s.io/kube-controller-manager:v1.30.0
registry.k8s.io/kube-apiserver:v1.30.0
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-692000
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-692000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-692000 image ls --format short --alsologtostderr:
I0429 12:24:50.841579   17672 out.go:291] Setting OutFile to fd 1 ...
I0429 12:24:50.841793   17672 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:24:50.841799   17672 out.go:304] Setting ErrFile to fd 2...
I0429 12:24:50.841802   17672 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:24:50.841979   17672 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18774-15855/.minikube/bin
I0429 12:24:50.842613   17672 config.go:182] Loaded profile config "functional-692000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 12:24:50.842712   17672 config.go:182] Loaded profile config "functional-692000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 12:24:50.843713   17672 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0429 12:24:50.843784   17672 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0429 12:24:50.852858   17672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61911
I0429 12:24:50.853300   17672 main.go:141] libmachine: () Calling .GetVersion
I0429 12:24:50.853707   17672 main.go:141] libmachine: Using API Version  1
I0429 12:24:50.853717   17672 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:24:50.853931   17672 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:24:50.854037   17672 main.go:141] libmachine: (functional-692000) Calling .GetState
I0429 12:24:50.854116   17672 main.go:141] libmachine: (functional-692000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0429 12:24:50.854197   17672 main.go:141] libmachine: (functional-692000) DBG | hyperkit pid from json: 16895
I0429 12:24:50.855499   17672 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0429 12:24:50.855526   17672 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0429 12:24:50.864259   17672 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61913
I0429 12:24:50.864664   17672 main.go:141] libmachine: () Calling .GetVersion
I0429 12:24:50.865010   17672 main.go:141] libmachine: Using API Version  1
I0429 12:24:50.865028   17672 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:24:50.865270   17672 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:24:50.865394   17672 main.go:141] libmachine: (functional-692000) Calling .DriverName
I0429 12:24:50.865576   17672 ssh_runner.go:195] Run: systemctl --version
I0429 12:24:50.865595   17672 main.go:141] libmachine: (functional-692000) Calling .GetSSHHostname
I0429 12:24:50.865670   17672 main.go:141] libmachine: (functional-692000) Calling .GetSSHPort
I0429 12:24:50.865753   17672 main.go:141] libmachine: (functional-692000) Calling .GetSSHKeyPath
I0429 12:24:50.865851   17672 main.go:141] libmachine: (functional-692000) Calling .GetSSHUsername
I0429 12:24:50.865936   17672 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/functional-692000/id_rsa Username:docker}
I0429 12:24:50.903544   17672 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0429 12:24:50.922122   17672 main.go:141] libmachine: Making call to close driver server
I0429 12:24:50.922131   17672 main.go:141] libmachine: (functional-692000) Calling .Close
I0429 12:24:50.922296   17672 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:24:50.922305   17672 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:24:50.922312   17672 main.go:141] libmachine: Making call to close driver server
I0429 12:24:50.922318   17672 main.go:141] libmachine: (functional-692000) Calling .Close
I0429 12:24:50.922321   17672 main.go:141] libmachine: (functional-692000) DBG | Closing plugin on server side
I0429 12:24:50.922473   17672 main.go:141] libmachine: (functional-692000) DBG | Closing plugin on server side
I0429 12:24:50.922515   17672 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:24:50.922542   17672 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-692000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/nginx                     | alpine            | f4215f6ee683f | 48.3MB |
| registry.k8s.io/kube-apiserver              | v1.30.0           | c42f13656d0b2 | 117MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| gcr.io/google-containers/addon-resizer      | functional-692000 | ffd4cfbbe753e | 32.9MB |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
| registry.k8s.io/kube-controller-manager     | v1.30.0           | c7aad43836fa5 | 111MB  |
| docker.io/kubernetesui/metrics-scraper      | <none>            | 115053965e86b | 43.8MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| docker.io/localhost/my-image                | functional-692000 | 019cffc7cca54 | 1.24MB |
| docker.io/library/minikube-local-cache-test | functional-692000 | e23376378f95b | 30B    |
| registry.k8s.io/kube-scheduler              | v1.30.0           | 259c8277fcbbc | 62MB   |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/library/nginx                     | latest            | 7383c266ef252 | 188MB  |
| registry.k8s.io/kube-proxy                  | v1.30.0           | a0bf559e280cf | 84.7MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-692000 image ls --format table --alsologtostderr:
I0429 12:24:53.438428   17698 out.go:291] Setting OutFile to fd 1 ...
I0429 12:24:53.438634   17698 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:24:53.438639   17698 out.go:304] Setting ErrFile to fd 2...
I0429 12:24:53.438643   17698 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:24:53.438833   17698 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18774-15855/.minikube/bin
I0429 12:24:53.439437   17698 config.go:182] Loaded profile config "functional-692000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 12:24:53.439533   17698 config.go:182] Loaded profile config "functional-692000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 12:24:53.439880   17698 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0429 12:24:53.439928   17698 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0429 12:24:53.448065   17698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61946
I0429 12:24:53.448517   17698 main.go:141] libmachine: () Calling .GetVersion
I0429 12:24:53.449003   17698 main.go:141] libmachine: Using API Version  1
I0429 12:24:53.449019   17698 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:24:53.449249   17698 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:24:53.449368   17698 main.go:141] libmachine: (functional-692000) Calling .GetState
I0429 12:24:53.449512   17698 main.go:141] libmachine: (functional-692000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0429 12:24:53.449602   17698 main.go:141] libmachine: (functional-692000) DBG | hyperkit pid from json: 16895
I0429 12:24:53.450925   17698 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0429 12:24:53.450960   17698 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0429 12:24:53.459437   17698 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61948
I0429 12:24:53.459782   17698 main.go:141] libmachine: () Calling .GetVersion
I0429 12:24:53.460120   17698 main.go:141] libmachine: Using API Version  1
I0429 12:24:53.460134   17698 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:24:53.460341   17698 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:24:53.460453   17698 main.go:141] libmachine: (functional-692000) Calling .DriverName
I0429 12:24:53.460602   17698 ssh_runner.go:195] Run: systemctl --version
I0429 12:24:53.460631   17698 main.go:141] libmachine: (functional-692000) Calling .GetSSHHostname
I0429 12:24:53.460717   17698 main.go:141] libmachine: (functional-692000) Calling .GetSSHPort
I0429 12:24:53.460789   17698 main.go:141] libmachine: (functional-692000) Calling .GetSSHKeyPath
I0429 12:24:53.460885   17698 main.go:141] libmachine: (functional-692000) Calling .GetSSHUsername
I0429 12:24:53.460967   17698 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/functional-692000/id_rsa Username:docker}
I0429 12:24:53.498564   17698 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0429 12:24:53.531838   17698 main.go:141] libmachine: Making call to close driver server
I0429 12:24:53.531857   17698 main.go:141] libmachine: (functional-692000) Calling .Close
I0429 12:24:53.532038   17698 main.go:141] libmachine: (functional-692000) DBG | Closing plugin on server side
I0429 12:24:53.532047   17698 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:24:53.532055   17698 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:24:53.532064   17698 main.go:141] libmachine: Making call to close driver server
I0429 12:24:53.532068   17698 main.go:141] libmachine: (functional-692000) Calling .Close
I0429 12:24:53.532256   17698 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:24:53.532264   17698 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:24:53.532256   17698 main.go:141] libmachine: (functional-692000) DBG | Closing plugin on server side
2024/04/29 12:25:01 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-692000 image ls --format json --alsologtostderr:
[{"id":"019cffc7cca54e27eee43fa8b0f4626a6870002205f938e18cac520bbb2910e4","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-692000"],"size":"1240000"},{"id":"e23376378f95baba155bf4c55d8de680871e4a2bc4942956d9cb7590993ccfa4","repoDigests":[],"repoTags":["docker.io/library/minikube-local-cache-test:functional-692000"],"size":"30"},{"id":"c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.0"],"size":"111000000"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"6e38f40d628db3002f
5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"48300000"},{"id":"259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced","repoDigests":[],"repoTags":["registry.k8s.io/kube-
scheduler:v1.30.0"],"size":"62000000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":[],"repoTags":["gcr.io/google-containers/addon-resizer:functional-692000"],"size":"32900000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.0"],"size":"117000000"},{"id":"a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.0"],"size":"84700000"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"cbb01a7b
d410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":[],"repoTags":["docker.io/kubernetesui/metrics-scraper:\u003cnone\u003e"],"size":"43800000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-692000 image ls --format json --alsologtostderr:
I0429 12:24:53.274635   17694 out.go:291] Setting OutFile to fd 1 ...
I0429 12:24:53.274831   17694 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:24:53.274837   17694 out.go:304] Setting ErrFile to fd 2...
I0429 12:24:53.274841   17694 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:24:53.275565   17694 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18774-15855/.minikube/bin
I0429 12:24:53.276606   17694 config.go:182] Loaded profile config "functional-692000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 12:24:53.276699   17694 config.go:182] Loaded profile config "functional-692000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 12:24:53.277028   17694 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0429 12:24:53.277081   17694 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0429 12:24:53.285384   17694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61941
I0429 12:24:53.285829   17694 main.go:141] libmachine: () Calling .GetVersion
I0429 12:24:53.286252   17694 main.go:141] libmachine: Using API Version  1
I0429 12:24:53.286261   17694 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:24:53.286454   17694 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:24:53.286562   17694 main.go:141] libmachine: (functional-692000) Calling .GetState
I0429 12:24:53.286655   17694 main.go:141] libmachine: (functional-692000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0429 12:24:53.286713   17694 main.go:141] libmachine: (functional-692000) DBG | hyperkit pid from json: 16895
I0429 12:24:53.287998   17694 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0429 12:24:53.288017   17694 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0429 12:24:53.296305   17694 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61943
I0429 12:24:53.296624   17694 main.go:141] libmachine: () Calling .GetVersion
I0429 12:24:53.296941   17694 main.go:141] libmachine: Using API Version  1
I0429 12:24:53.296952   17694 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:24:53.297152   17694 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:24:53.297260   17694 main.go:141] libmachine: (functional-692000) Calling .DriverName
I0429 12:24:53.297415   17694 ssh_runner.go:195] Run: systemctl --version
I0429 12:24:53.297434   17694 main.go:141] libmachine: (functional-692000) Calling .GetSSHHostname
I0429 12:24:53.297523   17694 main.go:141] libmachine: (functional-692000) Calling .GetSSHPort
I0429 12:24:53.297615   17694 main.go:141] libmachine: (functional-692000) Calling .GetSSHKeyPath
I0429 12:24:53.297691   17694 main.go:141] libmachine: (functional-692000) Calling .GetSSHUsername
I0429 12:24:53.297791   17694 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/functional-692000/id_rsa Username:docker}
I0429 12:24:53.335618   17694 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0429 12:24:53.352170   17694 main.go:141] libmachine: Making call to close driver server
I0429 12:24:53.352178   17694 main.go:141] libmachine: (functional-692000) Calling .Close
I0429 12:24:53.352324   17694 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:24:53.352335   17694 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:24:53.352341   17694 main.go:141] libmachine: Making call to close driver server
I0429 12:24:53.352348   17694 main.go:141] libmachine: (functional-692000) Calling .Close
I0429 12:24:53.352380   17694 main.go:141] libmachine: (functional-692000) DBG | Closing plugin on server side
I0429 12:24:53.352474   17694 main.go:141] libmachine: (functional-692000) DBG | Closing plugin on server side
I0429 12:24:53.352477   17694 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:24:53.352492   17694 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-692000 image ls --format yaml --alsologtostderr:
- id: 259c8277fcbbc9e1cf308bc0b50582a180eb8cb8929dc8b870fa16660934bced
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.0
size: "62000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: e23376378f95baba155bf4c55d8de680871e4a2bc4942956d9cb7590993ccfa4
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-692000
size: "30"
- id: 7383c266ef252ad70806f3072ee8e63d2a16d1e6bafa6146a2da867fc7c41759
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: f4215f6ee683f29c0a4611b02d1adc3b7d986a96ab894eb5f7b9437c862c9499
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "48300000"
- id: a0bf559e280cf431fceb938087d59deeebcf29cbf3706746e07f7ac08e80ba0b
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.0
size: "84700000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: c42f13656d0b2e905ee7977f67ea7a17715b24fae9daca1fcfb303cdb90728f0
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.0
size: "117000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: c7aad43836fa5bd41152db04ba4c90f8e9451c40e06488442242582e5e112b1b
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.0
size: "111000000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests: []
repoTags:
- gcr.io/google-containers/addon-resizer:functional-692000
size: "32900000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-692000 image ls --format yaml --alsologtostderr:
I0429 12:24:51.010089   17677 out.go:291] Setting OutFile to fd 1 ...
I0429 12:24:51.010289   17677 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:24:51.010295   17677 out.go:304] Setting ErrFile to fd 2...
I0429 12:24:51.010299   17677 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:24:51.010492   17677 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18774-15855/.minikube/bin
I0429 12:24:51.011132   17677 config.go:182] Loaded profile config "functional-692000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 12:24:51.011230   17677 config.go:182] Loaded profile config "functional-692000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 12:24:51.011641   17677 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0429 12:24:51.011682   17677 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0429 12:24:51.020038   17677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61916
I0429 12:24:51.020444   17677 main.go:141] libmachine: () Calling .GetVersion
I0429 12:24:51.020852   17677 main.go:141] libmachine: Using API Version  1
I0429 12:24:51.020871   17677 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:24:51.021088   17677 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:24:51.021198   17677 main.go:141] libmachine: (functional-692000) Calling .GetState
I0429 12:24:51.021275   17677 main.go:141] libmachine: (functional-692000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0429 12:24:51.021347   17677 main.go:141] libmachine: (functional-692000) DBG | hyperkit pid from json: 16895
I0429 12:24:51.022665   17677 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0429 12:24:51.022691   17677 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0429 12:24:51.031366   17677 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61918
I0429 12:24:51.031763   17677 main.go:141] libmachine: () Calling .GetVersion
I0429 12:24:51.032118   17677 main.go:141] libmachine: Using API Version  1
I0429 12:24:51.032132   17677 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:24:51.032382   17677 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:24:51.032516   17677 main.go:141] libmachine: (functional-692000) Calling .DriverName
I0429 12:24:51.032693   17677 ssh_runner.go:195] Run: systemctl --version
I0429 12:24:51.032715   17677 main.go:141] libmachine: (functional-692000) Calling .GetSSHHostname
I0429 12:24:51.032800   17677 main.go:141] libmachine: (functional-692000) Calling .GetSSHPort
I0429 12:24:51.032882   17677 main.go:141] libmachine: (functional-692000) Calling .GetSSHKeyPath
I0429 12:24:51.032980   17677 main.go:141] libmachine: (functional-692000) Calling .GetSSHUsername
I0429 12:24:51.033071   17677 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/functional-692000/id_rsa Username:docker}
I0429 12:24:51.071348   17677 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0429 12:24:51.088740   17677 main.go:141] libmachine: Making call to close driver server
I0429 12:24:51.088749   17677 main.go:141] libmachine: (functional-692000) Calling .Close
I0429 12:24:51.088895   17677 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:24:51.088904   17677 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:24:51.088912   17677 main.go:141] libmachine: Making call to close driver server
I0429 12:24:51.088924   17677 main.go:141] libmachine: (functional-692000) DBG | Closing plugin on server side
I0429 12:24:51.088951   17677 main.go:141] libmachine: (functional-692000) Calling .Close
I0429 12:24:51.089077   17677 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:24:51.089085   17677 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:24:51.089096   17677 main.go:141] libmachine: (functional-692000) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (2.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-692000 ssh pgrep buildkitd: exit status 1 (136.663939ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 image build -t localhost/my-image:functional-692000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-692000 image build -t localhost/my-image:functional-692000 testdata/build --alsologtostderr: (1.798339587s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-692000 image build -t localhost/my-image:functional-692000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in b6d67c79699b
---> Removed intermediate container b6d67c79699b
---> a767d34220e1
Step 3/3 : ADD content.txt /
---> 019cffc7cca5
Successfully built 019cffc7cca5
Successfully tagged localhost/my-image:functional-692000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-692000 image build -t localhost/my-image:functional-692000 testdata/build --alsologtostderr:
I0429 12:24:51.312730   17686 out.go:291] Setting OutFile to fd 1 ...
I0429 12:24:51.314191   17686 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:24:51.314199   17686 out.go:304] Setting ErrFile to fd 2...
I0429 12:24:51.314203   17686 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0429 12:24:51.314385   17686 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18774-15855/.minikube/bin
I0429 12:24:51.314954   17686 config.go:182] Loaded profile config "functional-692000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 12:24:51.315627   17686 config.go:182] Loaded profile config "functional-692000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
I0429 12:24:51.315976   17686 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0429 12:24:51.316016   17686 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0429 12:24:51.324422   17686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61929
I0429 12:24:51.324856   17686 main.go:141] libmachine: () Calling .GetVersion
I0429 12:24:51.325286   17686 main.go:141] libmachine: Using API Version  1
I0429 12:24:51.325296   17686 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:24:51.325527   17686 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:24:51.325651   17686 main.go:141] libmachine: (functional-692000) Calling .GetState
I0429 12:24:51.325737   17686 main.go:141] libmachine: (functional-692000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0429 12:24:51.325810   17686 main.go:141] libmachine: (functional-692000) DBG | hyperkit pid from json: 16895
I0429 12:24:51.327080   17686 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0429 12:24:51.327099   17686 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0429 12:24:51.335363   17686 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:61931
I0429 12:24:51.335704   17686 main.go:141] libmachine: () Calling .GetVersion
I0429 12:24:51.336071   17686 main.go:141] libmachine: Using API Version  1
I0429 12:24:51.336087   17686 main.go:141] libmachine: () Calling .SetConfigRaw
I0429 12:24:51.336296   17686 main.go:141] libmachine: () Calling .GetMachineName
I0429 12:24:51.336419   17686 main.go:141] libmachine: (functional-692000) Calling .DriverName
I0429 12:24:51.336574   17686 ssh_runner.go:195] Run: systemctl --version
I0429 12:24:51.336595   17686 main.go:141] libmachine: (functional-692000) Calling .GetSSHHostname
I0429 12:24:51.336698   17686 main.go:141] libmachine: (functional-692000) Calling .GetSSHPort
I0429 12:24:51.336777   17686 main.go:141] libmachine: (functional-692000) Calling .GetSSHKeyPath
I0429 12:24:51.336855   17686 main.go:141] libmachine: (functional-692000) Calling .GetSSHUsername
I0429 12:24:51.336951   17686 sshutil.go:53] new ssh client: &{IP:192.169.0.52 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/functional-692000/id_rsa Username:docker}
I0429 12:24:51.373682   17686 build_images.go:161] Building image from path: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.2476440236.tar
I0429 12:24:51.373754   17686 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0429 12:24:51.382063   17686 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.2476440236.tar
I0429 12:24:51.385411   17686 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.2476440236.tar: stat -c "%s %y" /var/lib/minikube/build/build.2476440236.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.2476440236.tar': No such file or directory
I0429 12:24:51.385440   17686 ssh_runner.go:362] scp /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.2476440236.tar --> /var/lib/minikube/build/build.2476440236.tar (3072 bytes)
I0429 12:24:51.405773   17686 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.2476440236
I0429 12:24:51.415213   17686 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.2476440236 -xf /var/lib/minikube/build/build.2476440236.tar
I0429 12:24:51.423317   17686 docker.go:360] Building image: /var/lib/minikube/build/build.2476440236
I0429 12:24:51.423380   17686 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-692000 /var/lib/minikube/build/build.2476440236
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
I0429 12:24:52.993218   17686 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-692000 /var/lib/minikube/build/build.2476440236: (1.569842583s)
I0429 12:24:52.993276   17686 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.2476440236
I0429 12:24:53.003302   17686 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.2476440236.tar
I0429 12:24:53.019654   17686 build_images.go:217] Built localhost/my-image:functional-692000 from /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.2476440236.tar
I0429 12:24:53.019679   17686 build_images.go:133] succeeded building to: functional-692000
I0429 12:24:53.019683   17686 build_images.go:134] failed building to: 
I0429 12:24:53.019700   17686 main.go:141] libmachine: Making call to close driver server
I0429 12:24:53.019707   17686 main.go:141] libmachine: (functional-692000) Calling .Close
I0429 12:24:53.019854   17686 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:24:53.019862   17686 main.go:141] libmachine: Making call to close connection to plugin binary
I0429 12:24:53.019869   17686 main.go:141] libmachine: Making call to close driver server
I0429 12:24:53.019875   17686 main.go:141] libmachine: (functional-692000) Calling .Close
I0429 12:24:53.020032   17686 main.go:141] libmachine: Successfully made call to close driver server
I0429 12:24:53.020038   17686 main.go:141] libmachine: (functional-692000) DBG | Closing plugin on server side
I0429 12:24:53.020041   17686 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.10s)
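Note: for reproducing the ImageBuild step outside the test harness, the same invocation can be driven from a small Go program. This is only a sketch, assuming the out/minikube-darwin-amd64 binary, the functional-692000 profile, and the testdata/build context that appear in the log above; it is not part of the minikube test suite.

	// reproduce_image_build.go - illustrative sketch only; binary path, profile
	// name, and build context directory are taken from the log above.
	package main

	import (
		"fmt"
		"log"
		"os/exec"
	)

	func main() {
		// Same invocation as the functional_test.go:314 Run line above.
		cmd := exec.Command("out/minikube-darwin-amd64",
			"-p", "functional-692000",
			"image", "build",
			"-t", "localhost/my-image:functional-692000",
			"testdata/build",
			"--alsologtostderr")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))
		if err != nil {
			log.Fatalf("image build failed: %v", err)
		}
	}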

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (2.4s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (2.296275579s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-692000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (2.40s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.8s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-692000 docker-env) && out/minikube-darwin-amd64 status -p functional-692000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-692000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.80s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.20s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 image load --daemon gcr.io/google-containers/addon-resizer:functional-692000 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-darwin-amd64 -p functional-692000 image load --daemon gcr.io/google-containers/addon-resizer:functional-692000 --alsologtostderr: (3.184119375s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (3.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 image load --daemon gcr.io/google-containers/addon-resizer:functional-692000 --alsologtostderr
E0429 12:23:55.225486   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
functional_test.go:364: (dbg) Done: out/minikube-darwin-amd64 -p functional-692000 image load --daemon gcr.io/google-containers/addon-resizer:functional-692000 --alsologtostderr: (1.951182593s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (2.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.890923351s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-692000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 image load --daemon gcr.io/google-containers/addon-resizer:functional-692000 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-darwin-amd64 -p functional-692000 image load --daemon gcr.io/google-containers/addon-resizer:functional-692000 --alsologtostderr: (3.064260681s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (5.18s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 image save gcr.io/google-containers/addon-resizer:functional-692000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-darwin-amd64 -p functional-692000 image save gcr.io/google-containers/addon-resizer:functional-692000 /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.165253792s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.17s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 image rm gcr.io/google-containers/addon-resizer:functional-692000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-darwin-amd64 -p functional-692000 image load /Users/jenkins/workspace/addon-resizer-save.tar --alsologtostderr: (1.322445029s)
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.50s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.23s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-692000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 image save --daemon gcr.io/google-containers/addon-resizer:functional-692000 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-darwin-amd64 -p functional-692000 image save --daemon gcr.io/google-containers/addon-resizer:functional-692000 --alsologtostderr: (1.133075835s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-692000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.23s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (13.15s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-692000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-692000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-bfbpg" [1a2fd811-fa73-472b-8141-9e665381a522] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-bfbpg" [1a2fd811-fa73-472b-8141-9e665381a522] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 13.006176236s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (13.15s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.37s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-692000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-692000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-692000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-692000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 17372: os: process already finished
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.37s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-692000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.14s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-692000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [c303f3c1-3527-4636-8695-3b91d0bbf707] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [c303f3c1-3527-4636-8695-3b91d0bbf707] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 11.002900032s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (11.14s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 service list -o json
functional_test.go:1490: Took "382.355389ms" to run "out/minikube-darwin-amd64 -p functional-692000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.169.0.52:30965
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.26s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.26s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.28s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.169.0.52:30965
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.28s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-692000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.99.135.224 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-692000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.13s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.33s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "217.189873ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "86.045698ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.30s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "221.983975ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "86.748763ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.31s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-692000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port4087152381/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1714418678232686000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port4087152381/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1714418678232686000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port4087152381/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1714418678232686000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port4087152381/001/test-1714418678232686000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-692000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (160.465589ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Apr 29 19:24 created-by-test
-rw-r--r-- 1 docker docker 24 Apr 29 19:24 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Apr 29 19:24 test-1714418678232686000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh cat /mount-9p/test-1714418678232686000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-692000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [aebe5c28-1418-448a-9fab-542f89fe0ae0] Pending
helpers_test.go:344: "busybox-mount" [aebe5c28-1418-448a-9fab-542f89fe0ae0] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [aebe5c28-1418-448a-9fab-542f89fe0ae0] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [aebe5c28-1418-448a-9fab-542f89fe0ae0] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.005278606s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-692000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-692000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port4087152381/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.03s)
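The any-port mount test seeds the host directory before mounting it at /mount-9p. A minimal Go sketch of that seeding step follows; the three file names are taken from the guest directory listing above, while the temp-dir location, file contents, and mode are assumptions rather than the test's actual implementation.

	// seed_mount_dir.go - illustrative sketch of preparing a host directory for
	// "minikube mount <dir>:/mount-9p"; not the functional_test_mount_test.go code.
	package main

	import (
		"fmt"
		"log"
		"os"
		"path/filepath"
		"time"
	)

	func main() {
		dir, err := os.MkdirTemp("", "mount-9p-") // assumption: any writable temp dir works
		if err != nil {
			log.Fatal(err)
		}
		stamp := fmt.Sprintf("test-%d", time.Now().UnixNano())
		// Same three file names that appear in the guest mount listing above.
		for _, name := range []string{"created-by-test", "created-by-test-removed-by-pod", stamp} {
			if err := os.WriteFile(filepath.Join(dir, name), []byte(stamp), 0o644); err != nil {
				log.Fatal(err)
			}
		}
		fmt.Println("seeded:", dir)
	}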

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-692000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port929628998/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-692000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (160.517219ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-692000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port929628998/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-692000 ssh "sudo umount -f /mount-9p": exit status 1 (138.537742ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-692000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-692000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port929628998/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.35s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-692000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1487767947/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-692000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1487767947/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-692000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1487767947/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-692000 ssh "findmnt -T" /mount1: exit status 1 (143.170021ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-692000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-692000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-692000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1487767947/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-692000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1487767947/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-692000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup1487767947/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.58s)

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.12s)

                                                
                                                
=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-692000
--- PASS: TestFunctional/delete_addon-resizer_images (0.12s)

                                                
                                    
TestFunctional/delete_my-image_image (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-692000
--- PASS: TestFunctional/delete_my-image_image (0.05s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.05s)

                                                
                                                
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-692000
--- PASS: TestFunctional/delete_minikube_cached_images (0.05s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.05s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-431000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.05s)

                                                
                                    
TestImageBuild/serial/Setup (39.53s)

                                                
                                                
=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-002000 --driver=hyperkit 
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-002000 --driver=hyperkit : (39.531053103s)
--- PASS: TestImageBuild/serial/Setup (39.53s)

                                                
                                    
TestImageBuild/serial/NormalBuild (1.31s)

                                                
                                                
=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-002000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-002000: (1.312024975s)
--- PASS: TestImageBuild/serial/NormalBuild (1.31s)

                                                
                                    
TestImageBuild/serial/BuildWithBuildArg (0.51s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-002000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.51s)

                                                
                                    
TestImageBuild/serial/BuildWithDockerIgnore (0.24s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-002000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.24s)

                                                
                                    
TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.22s)

                                                
                                                
=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-002000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.22s)

                                                
                                    
TestJSONOutput/start/Command (93.1s)

                                                
                                                
=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-476000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
E0429 12:58:51.721962   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 12:58:55.323168   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-476000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (1m33.100044315s)
--- PASS: TestJSONOutput/start/Command (93.10s)

                                                
                                    
TestJSONOutput/start/Audit (0s)

                                                
                                                
=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/Command (0.47s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-476000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.47s)

                                                
                                    
TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/Command (0.47s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-476000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.47s)

                                                
                                    
TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/Command (8.36s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-476000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-476000 --output=json --user=testUser: (8.355924883s)
--- PASS: TestJSONOutput/stop/Command (8.36s)

                                                
                                    
TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

                                                
                                    
TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.77s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-650000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-650000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (393.325176ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"6984f56c-9ddd-4d61-b70c-298750454929","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-650000] minikube v1.33.0 on Darwin 14.4.1","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"ce42f136-a820-4c58-9c15-5dd2ca65d86d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18774"}}
	{"specversion":"1.0","id":"bf65fa21-f9b7-406c-94c0-217b9d515385","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/18774-15855/kubeconfig"}}
	{"specversion":"1.0","id":"a3d9601a-ed50-4de2-a13b-5681c09519b3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"ea3dff00-3542-4591-ac6e-33540d07070d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"dd549893-59d9-4b5c-8e8c-77e7e54fdca8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/18774-15855/.minikube"}}
	{"specversion":"1.0","id":"194db10d-d9ad-4418-98f6-99d2ec12d80d","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"dbff7b28-62c2-41af-b91a-54bda843f457","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-650000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-650000
--- PASS: TestErrorJSONOutput (0.77s)
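
The stdout block above shows the line-delimited, CloudEvents-style JSON events that minikube emits when started with --output=json. The following minimal Go sketch consumes output of that shape; it assumes only the field names visible in those events (specversion, id, source, type, data), and the stdin-reading harness is an illustrative assumption, not code from the minikube repository or this test suite.

// Minimal sketch: decode line-delimited minikube JSON events from stdin.
// Field names are taken from the events shown in the stdout block above;
// everything else here is illustrative.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type minikubeEvent struct {
	SpecVersion string            `json:"specversion"`
	ID          string            `json:"id"`
	Source      string            `json:"source"`
	Type        string            `json:"type"`
	Data        map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev minikubeEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip lines that are not JSON events
		}
		// Error events (type io.k8s.sigs.minikube.error) carry the exit code
		// and message, as in the DRV_UNSUPPORTED_OS event above.
		fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
	}
}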

                                                
                                    
TestMainNoArgs (0.09s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.09s)

                                                
                                    
TestMinikubeProfile (91.24s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-595000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-595000 --driver=hyperkit : (39.792877595s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-597000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-597000 --driver=hyperkit : (39.910039151s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-595000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-597000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-597000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-597000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-597000: (5.288956817s)
helpers_test.go:175: Cleaning up "first-595000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-595000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-595000: (5.315839778s)
--- PASS: TestMinikubeProfile (91.24s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (21.05s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-833000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
E0429 13:01:54.775173   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-833000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : (20.045199192s)
--- PASS: TestMountStart/serial/StartWithMountFirst (21.05s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.31s)

=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-833000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-833000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.31s)

                                                
                                    
TestMountStart/serial/StartWithMountSecond (21.23s)

=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-843000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-843000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=hyperkit : (20.229953239s)
--- PASS: TestMountStart/serial/StartWithMountSecond (21.23s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.31s)

=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-843000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-843000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.31s)

                                                
                                    
TestMountStart/serial/DeleteFirst (2.39s)

=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p mount-start-1-833000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p mount-start-1-833000 --alsologtostderr -v=5: (2.387319491s)
--- PASS: TestMountStart/serial/DeleteFirst (2.39s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.31s)

=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-843000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-843000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.31s)

                                                
                                    
TestMountStart/serial/Stop (2.39s)

=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 stop -p mount-start-2-843000
mount_start_test.go:155: (dbg) Done: out/minikube-darwin-amd64 stop -p mount-start-2-843000: (2.393705294s)
--- PASS: TestMountStart/serial/Stop (2.39s)

                                                
                                    
TestMountStart/serial/RestartStopped (18.34s)

=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-2-843000
mount_start_test.go:166: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-2-843000: (17.34051785s)
--- PASS: TestMountStart/serial/RestartStopped (18.34s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.32s)

=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-843000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-2-843000 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.32s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (95.45s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-181000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0429 13:03:51.767549   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 13:03:55.367943   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-181000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (1m35.195339077s)
multinode_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (95.45s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.23s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-181000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-181000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-181000 -- rollout status deployment/busybox: (2.542725207s)
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-181000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-181000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-181000 -- exec busybox-fc5497c4f-g7n7t -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-181000 -- exec busybox-fc5497c4f-nrg7c -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-181000 -- exec busybox-fc5497c4f-g7n7t -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-181000 -- exec busybox-fc5497c4f-nrg7c -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-181000 -- exec busybox-fc5497c4f-g7n7t -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-181000 -- exec busybox-fc5497c4f-nrg7c -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.23s)

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.92s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-181000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-181000 -- exec busybox-fc5497c4f-g7n7t -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-181000 -- exec busybox-fc5497c4f-g7n7t -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-181000 -- exec busybox-fc5497c4f-nrg7c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-181000 -- exec busybox-fc5497c4f-nrg7c -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.92s)

                                                
                                    
TestMultiNode/serial/AddNode (35.26s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-181000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-181000 -v 3 --alsologtostderr: (34.943656745s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (35.26s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.05s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-181000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.05s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.21s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.21s)

                                                
                                    
TestMultiNode/serial/CopyFile (5.46s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 cp testdata/cp-test.txt multinode-181000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 ssh -n multinode-181000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 cp multinode-181000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile3061042506/001/cp-test_multinode-181000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 ssh -n multinode-181000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 cp multinode-181000:/home/docker/cp-test.txt multinode-181000-m02:/home/docker/cp-test_multinode-181000_multinode-181000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 ssh -n multinode-181000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 ssh -n multinode-181000-m02 "sudo cat /home/docker/cp-test_multinode-181000_multinode-181000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 cp multinode-181000:/home/docker/cp-test.txt multinode-181000-m03:/home/docker/cp-test_multinode-181000_multinode-181000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 ssh -n multinode-181000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 ssh -n multinode-181000-m03 "sudo cat /home/docker/cp-test_multinode-181000_multinode-181000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 cp testdata/cp-test.txt multinode-181000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 ssh -n multinode-181000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 cp multinode-181000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile3061042506/001/cp-test_multinode-181000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 ssh -n multinode-181000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 cp multinode-181000-m02:/home/docker/cp-test.txt multinode-181000:/home/docker/cp-test_multinode-181000-m02_multinode-181000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 ssh -n multinode-181000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 ssh -n multinode-181000 "sudo cat /home/docker/cp-test_multinode-181000-m02_multinode-181000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 cp multinode-181000-m02:/home/docker/cp-test.txt multinode-181000-m03:/home/docker/cp-test_multinode-181000-m02_multinode-181000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 ssh -n multinode-181000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 ssh -n multinode-181000-m03 "sudo cat /home/docker/cp-test_multinode-181000-m02_multinode-181000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 cp testdata/cp-test.txt multinode-181000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 ssh -n multinode-181000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 cp multinode-181000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile3061042506/001/cp-test_multinode-181000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 ssh -n multinode-181000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 cp multinode-181000-m03:/home/docker/cp-test.txt multinode-181000:/home/docker/cp-test_multinode-181000-m03_multinode-181000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 ssh -n multinode-181000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 ssh -n multinode-181000 "sudo cat /home/docker/cp-test_multinode-181000-m03_multinode-181000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 cp multinode-181000-m03:/home/docker/cp-test.txt multinode-181000-m02:/home/docker/cp-test_multinode-181000-m03_multinode-181000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 ssh -n multinode-181000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 ssh -n multinode-181000-m02 "sudo cat /home/docker/cp-test_multinode-181000-m03_multinode-181000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.46s)

                                                
                                    
TestMultiNode/serial/StopNode (2.85s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-181000 node stop m03: (2.338145519s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-181000 status: exit status 7 (256.886534ms)

                                                
                                                
-- stdout --
	multinode-181000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-181000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-181000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-181000 status --alsologtostderr: exit status 7 (258.873206ms)

                                                
                                                
-- stdout --
	multinode-181000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-181000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-181000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 13:05:24.935058   19437 out.go:291] Setting OutFile to fd 1 ...
	I0429 13:05:24.935257   19437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:05:24.935263   19437 out.go:304] Setting ErrFile to fd 2...
	I0429 13:05:24.935267   19437 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:05:24.935451   19437 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18774-15855/.minikube/bin
	I0429 13:05:24.935626   19437 out.go:298] Setting JSON to false
	I0429 13:05:24.935647   19437 mustload.go:65] Loading cluster: multinode-181000
	I0429 13:05:24.935683   19437 notify.go:220] Checking for updates...
	I0429 13:05:24.935981   19437 config.go:182] Loaded profile config "multinode-181000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 13:05:24.935994   19437 status.go:255] checking status of multinode-181000 ...
	I0429 13:05:24.936368   19437 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 13:05:24.936409   19437 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 13:05:24.945029   19437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:63216
	I0429 13:05:24.945383   19437 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:05:24.945790   19437 main.go:141] libmachine: Using API Version  1
	I0429 13:05:24.945799   19437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:05:24.946022   19437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:05:24.946143   19437 main.go:141] libmachine: (multinode-181000) Calling .GetState
	I0429 13:05:24.946290   19437 main.go:141] libmachine: (multinode-181000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 13:05:24.946320   19437 main.go:141] libmachine: (multinode-181000) DBG | hyperkit pid from json: 19099
	I0429 13:05:24.947496   19437 status.go:330] multinode-181000 host status = "Running" (err=<nil>)
	I0429 13:05:24.947516   19437 host.go:66] Checking if "multinode-181000" exists ...
	I0429 13:05:24.947761   19437 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 13:05:24.947781   19437 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 13:05:24.956213   19437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:63218
	I0429 13:05:24.956579   19437 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:05:24.956894   19437 main.go:141] libmachine: Using API Version  1
	I0429 13:05:24.956919   19437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:05:24.957210   19437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:05:24.957325   19437 main.go:141] libmachine: (multinode-181000) Calling .GetIP
	I0429 13:05:24.957414   19437 host.go:66] Checking if "multinode-181000" exists ...
	I0429 13:05:24.957666   19437 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 13:05:24.957692   19437 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 13:05:24.966315   19437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:63220
	I0429 13:05:24.966669   19437 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:05:24.967033   19437 main.go:141] libmachine: Using API Version  1
	I0429 13:05:24.967051   19437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:05:24.967237   19437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:05:24.967344   19437 main.go:141] libmachine: (multinode-181000) Calling .DriverName
	I0429 13:05:24.967479   19437 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 13:05:24.967502   19437 main.go:141] libmachine: (multinode-181000) Calling .GetSSHHostname
	I0429 13:05:24.967580   19437 main.go:141] libmachine: (multinode-181000) Calling .GetSSHPort
	I0429 13:05:24.967670   19437 main.go:141] libmachine: (multinode-181000) Calling .GetSSHKeyPath
	I0429 13:05:24.967749   19437 main.go:141] libmachine: (multinode-181000) Calling .GetSSHUsername
	I0429 13:05:24.967833   19437 sshutil.go:53] new ssh client: &{IP:192.169.0.62 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/multinode-181000/id_rsa Username:docker}
	I0429 13:05:24.998231   19437 ssh_runner.go:195] Run: systemctl --version
	I0429 13:05:25.003075   19437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 13:05:25.014480   19437 kubeconfig.go:125] found "multinode-181000" server: "https://192.169.0.62:8443"
	I0429 13:05:25.014505   19437 api_server.go:166] Checking apiserver status ...
	I0429 13:05:25.014537   19437 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0429 13:05:25.029246   19437 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1875/cgroup
	W0429 13:05:25.037986   19437 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1875/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0429 13:05:25.038053   19437 ssh_runner.go:195] Run: ls
	I0429 13:05:25.041470   19437 api_server.go:253] Checking apiserver healthz at https://192.169.0.62:8443/healthz ...
	I0429 13:05:25.044583   19437 api_server.go:279] https://192.169.0.62:8443/healthz returned 200:
	ok
	I0429 13:05:25.044595   19437 status.go:422] multinode-181000 apiserver status = Running (err=<nil>)
	I0429 13:05:25.044604   19437 status.go:257] multinode-181000 status: &{Name:multinode-181000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 13:05:25.044616   19437 status.go:255] checking status of multinode-181000-m02 ...
	I0429 13:05:25.044857   19437 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 13:05:25.044889   19437 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 13:05:25.053641   19437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:63224
	I0429 13:05:25.054021   19437 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:05:25.054367   19437 main.go:141] libmachine: Using API Version  1
	I0429 13:05:25.054379   19437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:05:25.054620   19437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:05:25.054742   19437 main.go:141] libmachine: (multinode-181000-m02) Calling .GetState
	I0429 13:05:25.054834   19437 main.go:141] libmachine: (multinode-181000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 13:05:25.054922   19437 main.go:141] libmachine: (multinode-181000-m02) DBG | hyperkit pid from json: 19145
	I0429 13:05:25.056143   19437 status.go:330] multinode-181000-m02 host status = "Running" (err=<nil>)
	I0429 13:05:25.056153   19437 host.go:66] Checking if "multinode-181000-m02" exists ...
	I0429 13:05:25.056405   19437 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 13:05:25.056432   19437 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 13:05:25.065738   19437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:63226
	I0429 13:05:25.066103   19437 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:05:25.066414   19437 main.go:141] libmachine: Using API Version  1
	I0429 13:05:25.066425   19437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:05:25.066633   19437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:05:25.066742   19437 main.go:141] libmachine: (multinode-181000-m02) Calling .GetIP
	I0429 13:05:25.066824   19437 host.go:66] Checking if "multinode-181000-m02" exists ...
	I0429 13:05:25.067072   19437 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 13:05:25.067093   19437 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 13:05:25.075731   19437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:63228
	I0429 13:05:25.076082   19437 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:05:25.076397   19437 main.go:141] libmachine: Using API Version  1
	I0429 13:05:25.076407   19437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:05:25.076615   19437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:05:25.076725   19437 main.go:141] libmachine: (multinode-181000-m02) Calling .DriverName
	I0429 13:05:25.076851   19437 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0429 13:05:25.076862   19437 main.go:141] libmachine: (multinode-181000-m02) Calling .GetSSHHostname
	I0429 13:05:25.076943   19437 main.go:141] libmachine: (multinode-181000-m02) Calling .GetSSHPort
	I0429 13:05:25.077021   19437 main.go:141] libmachine: (multinode-181000-m02) Calling .GetSSHKeyPath
	I0429 13:05:25.077107   19437 main.go:141] libmachine: (multinode-181000-m02) Calling .GetSSHUsername
	I0429 13:05:25.077196   19437 sshutil.go:53] new ssh client: &{IP:192.169.0.63 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/18774-15855/.minikube/machines/multinode-181000-m02/id_rsa Username:docker}
	I0429 13:05:25.107635   19437 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0429 13:05:25.118654   19437 status.go:257] multinode-181000-m02 status: &{Name:multinode-181000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0429 13:05:25.118676   19437 status.go:255] checking status of multinode-181000-m03 ...
	I0429 13:05:25.118941   19437 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 13:05:25.118963   19437 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 13:05:25.127599   19437 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:63231
	I0429 13:05:25.127944   19437 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:05:25.128250   19437 main.go:141] libmachine: Using API Version  1
	I0429 13:05:25.128261   19437 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:05:25.128472   19437 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:05:25.128587   19437 main.go:141] libmachine: (multinode-181000-m03) Calling .GetState
	I0429 13:05:25.128677   19437 main.go:141] libmachine: (multinode-181000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 13:05:25.128748   19437 main.go:141] libmachine: (multinode-181000-m03) DBG | hyperkit pid from json: 19226
	I0429 13:05:25.129910   19437 main.go:141] libmachine: (multinode-181000-m03) DBG | hyperkit pid 19226 missing from process table
	I0429 13:05:25.129937   19437 status.go:330] multinode-181000-m03 host status = "Stopped" (err=<nil>)
	I0429 13:05:25.129943   19437 status.go:343] host is not running, skipping remaining checks
	I0429 13:05:25.129950   19437 status.go:257] multinode-181000-m03 status: &{Name:multinode-181000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.85s)
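
The status runs above exit with status 7 while node m03 is stopped, which is why the test checks for a non-zero exit rather than treating it as a failure. Below is a minimal Go sketch of invoking the same command and tolerating that exit code; the binary path and profile name are copied from the log lines above, and the simplified error handling is an illustrative assumption, not the test's own helper code.

// Minimal sketch: run "minikube status" for the profile above and report the
// non-zero exit code that appears while a node is stopped.
package main

import (
	"fmt"
	"os/exec"
)

func main() {
	cmd := exec.Command("out/minikube-darwin-amd64", "-p", "multinode-181000", "status")
	out, err := cmd.CombinedOutput()
	fmt.Print(string(out))
	if exitErr, ok := err.(*exec.ExitError); ok {
		// A non-zero code (7 in the run above) means at least one host or
		// kubelet is not running; it is expected right after "node stop m03".
		fmt.Printf("status exited with code %d\n", exitErr.ExitCode())
	} else if err != nil {
		fmt.Println("failed to run status:", err)
	}
}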

                                                
                                    
TestMultiNode/serial/StartAfterStop (26.71s)

=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-181000 node start m03 -v=7 --alsologtostderr: (26.340519371s)
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (26.71s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (146.49s)

=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-181000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-181000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-181000: (18.861497719s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-181000 --wait=true -v=8 --alsologtostderr
multinode_test.go:326: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-181000 --wait=true -v=8 --alsologtostderr: (2m7.500911957s)
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-181000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (146.49s)

                                                
                                    
TestMultiNode/serial/DeleteNode (3.39s)

=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-181000 node delete m03: (3.042893107s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (3.39s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (16.82s)

=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-181000 stop: (16.646101748s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 status
E0429 13:08:38.432333   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-181000 status: exit status 7 (87.272024ms)

                                                
                                                
-- stdout --
	multinode-181000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-181000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-181000 status --alsologtostderr: exit status 7 (87.494188ms)

                                                
                                                
-- stdout --
	multinode-181000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-181000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0429 13:08:38.521702   19637 out.go:291] Setting OutFile to fd 1 ...
	I0429 13:08:38.522444   19637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:08:38.522454   19637 out.go:304] Setting ErrFile to fd 2...
	I0429 13:08:38.522461   19637 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0429 13:08:38.523034   19637 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/18774-15855/.minikube/bin
	I0429 13:08:38.523231   19637 out.go:298] Setting JSON to false
	I0429 13:08:38.523255   19637 mustload.go:65] Loading cluster: multinode-181000
	I0429 13:08:38.523298   19637 notify.go:220] Checking for updates...
	I0429 13:08:38.523541   19637 config.go:182] Loaded profile config "multinode-181000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.0
	I0429 13:08:38.523555   19637 status.go:255] checking status of multinode-181000 ...
	I0429 13:08:38.523908   19637 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 13:08:38.523949   19637 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 13:08:38.532568   19637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:63462
	I0429 13:08:38.533013   19637 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:08:38.533436   19637 main.go:141] libmachine: Using API Version  1
	I0429 13:08:38.533445   19637 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:08:38.533657   19637 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:08:38.533762   19637 main.go:141] libmachine: (multinode-181000) Calling .GetState
	I0429 13:08:38.533842   19637 main.go:141] libmachine: (multinode-181000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 13:08:38.533910   19637 main.go:141] libmachine: (multinode-181000) DBG | hyperkit pid from json: 19517
	I0429 13:08:38.534817   19637 main.go:141] libmachine: (multinode-181000) DBG | hyperkit pid 19517 missing from process table
	I0429 13:08:38.534849   19637 status.go:330] multinode-181000 host status = "Stopped" (err=<nil>)
	I0429 13:08:38.534855   19637 status.go:343] host is not running, skipping remaining checks
	I0429 13:08:38.534862   19637 status.go:257] multinode-181000 status: &{Name:multinode-181000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0429 13:08:38.534878   19637 status.go:255] checking status of multinode-181000-m02 ...
	I0429 13:08:38.535130   19637 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0429 13:08:38.535150   19637 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0429 13:08:38.543428   19637 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:63464
	I0429 13:08:38.543747   19637 main.go:141] libmachine: () Calling .GetVersion
	I0429 13:08:38.544086   19637 main.go:141] libmachine: Using API Version  1
	I0429 13:08:38.544108   19637 main.go:141] libmachine: () Calling .SetConfigRaw
	I0429 13:08:38.544364   19637 main.go:141] libmachine: () Calling .GetMachineName
	I0429 13:08:38.544488   19637 main.go:141] libmachine: (multinode-181000-m02) Calling .GetState
	I0429 13:08:38.544582   19637 main.go:141] libmachine: (multinode-181000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0429 13:08:38.544650   19637 main.go:141] libmachine: (multinode-181000-m02) DBG | hyperkit pid from json: 19543
	I0429 13:08:38.545551   19637 main.go:141] libmachine: (multinode-181000-m02) DBG | hyperkit pid 19543 missing from process table
	I0429 13:08:38.545585   19637 status.go:330] multinode-181000-m02 host status = "Stopped" (err=<nil>)
	I0429 13:08:38.545594   19637 status.go:343] host is not running, skipping remaining checks
	I0429 13:08:38.545602   19637 status.go:257] multinode-181000-m02 status: &{Name:multinode-181000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.82s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (214.01s)

=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-181000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E0429 13:08:51.769383   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 13:08:55.370352   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-181000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (3m33.671502569s)
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-181000 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (214.01s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (47.92s)

=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-181000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-181000-m02 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-181000-m02 --driver=hyperkit : exit status 14 (403.387949ms)

                                                
                                                
-- stdout --
	* [multinode-181000-m02] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18774-15855/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18774-15855/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-181000-m02' is duplicated with machine name 'multinode-181000-m02' in profile 'multinode-181000'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-181000-m03 --driver=hyperkit 
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-181000-m03 --driver=hyperkit : (39.665275341s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-181000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-181000: exit status 80 (279.898042ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-181000 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-181000-m03 already exists in multinode-181000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-181000-m03
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-181000-m03: (7.506012481s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (47.92s)

                                                
                                    
TestPreload (170.48s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-635000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
E0429 13:13:51.768776   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 13:13:55.369380   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-635000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m22.082232507s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-635000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-635000 image pull gcr.io/k8s-minikube/busybox: (1.303882066s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-635000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-635000: (8.394238149s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-635000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-635000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (1m13.253003863s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-635000 image list
helpers_test.go:175: Cleaning up "test-preload-635000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-635000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-635000: (5.283529201s)
--- PASS: TestPreload (170.48s)

                                                
                                    
TestScheduledStopUnix (226.4s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-404000 --memory=2048 --driver=hyperkit 
E0429 13:18:34.823864   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-404000 --memory=2048 --driver=hyperkit : (2m34.820035392s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-404000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-404000 -n scheduled-stop-404000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-404000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-404000 --cancel-scheduled
E0429 13:18:51.768490   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
E0429 13:18:55.369370   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-404000 -n scheduled-stop-404000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-404000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-404000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-404000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-404000: exit status 7 (80.878151ms)

                                                
                                                
-- stdout --
	scheduled-stop-404000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-404000 -n scheduled-stop-404000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-404000 -n scheduled-stop-404000: exit status 7 (76.670549ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-404000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-404000
--- PASS: TestScheduledStopUnix (226.40s)

                                                
                                    
TestSkaffold (114.96s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe987951828 version
skaffold_test.go:59: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe987951828 version: (1.506554707s)
skaffold_test.go:63: skaffold version: v2.11.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-330000 --memory=2600 --driver=hyperkit 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-330000 --memory=2600 --driver=hyperkit : (38.229047839s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe987951828 run --minikube-profile skaffold-330000 --kube-context skaffold-330000 --status-check=true --port-forward=false --interactive=false
skaffold_test.go:105: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe987951828 run --minikube-profile skaffold-330000 --kube-context skaffold-330000 --status-check=true --port-forward=false --interactive=false: (56.779000747s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-787fd9b4bb-xw625" [c87cd7ec-535e-44b2-bb25-9ff5f6fd5458] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.003940241s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-898c45d55-d4vv8" [c1cb6f38-8355-496b-a16d-db3267555cd4] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.004118423s
helpers_test.go:175: Cleaning up "skaffold-330000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-330000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-330000: (5.300746056s)
--- PASS: TestSkaffold (114.96s)

                                                
                                    
TestRunningBinaryUpgrade (110.54s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.761381851 start -p running-upgrade-300000 --memory=2200 --vm-driver=hyperkit 
E0429 13:26:26.515426   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/skaffold-330000/client.crt: no such file or directory
E0429 13:26:26.520542   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/skaffold-330000/client.crt: no such file or directory
E0429 13:26:26.531391   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/skaffold-330000/client.crt: no such file or directory
E0429 13:26:26.552910   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/skaffold-330000/client.crt: no such file or directory
E0429 13:26:26.594014   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/skaffold-330000/client.crt: no such file or directory
E0429 13:26:26.674409   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/skaffold-330000/client.crt: no such file or directory
E0429 13:26:26.834728   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/skaffold-330000/client.crt: no such file or directory
E0429 13:26:27.155126   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/skaffold-330000/client.crt: no such file or directory
E0429 13:26:27.795566   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/skaffold-330000/client.crt: no such file or directory
E0429 13:26:29.075723   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/skaffold-330000/client.crt: no such file or directory
E0429 13:26:31.637618   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/skaffold-330000/client.crt: no such file or directory
E0429 13:26:36.759785   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/skaffold-330000/client.crt: no such file or directory
E0429 13:26:47.001093   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/skaffold-330000/client.crt: no such file or directory
E0429 13:27:07.482371   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/skaffold-330000/client.crt: no such file or directory
version_upgrade_test.go:120: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.761381851 start -p running-upgrade-300000 --memory=2200 --vm-driver=hyperkit : (1m7.476815126s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-300000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-300000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (35.869033033s)
helpers_test.go:175: Cleaning up "running-upgrade-300000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-300000
E0429 13:27:48.443607   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/skaffold-330000/client.crt: no such file or directory
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-300000: (5.2778407s)
--- PASS: TestRunningBinaryUpgrade (110.54s)

                                                
                                    
TestKubernetesUpgrade (121.46s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-060000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-060000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : (54.741624556s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-060000
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-060000: (2.43709401s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-060000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-060000 status --format={{.Host}}: exit status 7 (80.514568ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-060000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperkit 
E0429 13:28:51.789630   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/functional-692000/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-060000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperkit : (34.152503241s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-060000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-060000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-060000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit : exit status 106 (519.932637ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-060000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18774-15855/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18774-15855/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.30.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-060000
	    minikube start -p kubernetes-upgrade-060000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0600002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.30.0, by running:
	    
	    minikube start -p kubernetes-upgrade-060000 --kubernetes-version=v1.30.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-060000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-060000 --memory=2200 --kubernetes-version=v1.30.0 --alsologtostderr -v=1 --driver=hyperkit : (24.19144355s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-060000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-060000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-060000: (5.276717767s)
--- PASS: TestKubernetesUpgrade (121.46s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.01s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.0 on darwin
- MINIKUBE_LOCATION=18774
- KUBECONFIG=/Users/jenkins/minikube-integration/18774-15855/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current178520603/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current178520603/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current178520603/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current178520603/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (3.01s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.28s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.0 on darwin
- MINIKUBE_LOCATION=18774
- KUBECONFIG=/Users/jenkins/minikube-integration/18774-15855/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3925488634/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3925488634/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3925488634/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current3925488634/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (6.28s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.4s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.40s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (87.75s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.504917669 start -p stopped-upgrade-741000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:183: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.504917669 start -p stopped-upgrade-741000 --memory=2200 --vm-driver=hyperkit : (40.642135328s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.504917669 -p stopped-upgrade-741000 stop
E0429 13:28:55.388208   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/addons-531000/client.crt: no such file or directory
version_upgrade_test.go:192: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.504917669 -p stopped-upgrade-741000 stop: (8.249406378s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-741000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
E0429 13:29:10.363736   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/skaffold-330000/client.crt: no such file or directory
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-741000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (38.8618095s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (87.75s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (3.08s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-741000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-741000: (3.084485325s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (3.08s)

                                                
                                    
TestPause/serial/Start (171.95s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-979000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-979000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : (2m51.951279506s)
--- PASS: TestPause/serial/Start (171.95s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.51s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-636000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-636000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (514.306998ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-636000] minikube v1.33.0 on Darwin 14.4.1
	  - MINIKUBE_LOCATION=18774
	  - KUBECONFIG=/Users/jenkins/minikube-integration/18774-15855/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/18774-15855/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.51s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (51.4s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-636000 --driver=hyperkit 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-636000 --driver=hyperkit : (51.219680661s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-636000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (51.40s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.56s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-636000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-636000 --no-kubernetes --driver=hyperkit : (5.969131822s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-636000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-636000 status -o json: exit status 2 (160.862665ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-636000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-636000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-636000: (2.432857279s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.56s)

                                                
                                    
TestNoKubernetes/serial/Start (20.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-636000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-636000 --no-kubernetes --driver=hyperkit : (20.214942774s)
--- PASS: TestNoKubernetes/serial/Start (20.22s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-636000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-636000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (136.620609ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.14s)

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.52s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.52s)

                                                
                                    
TestNoKubernetes/serial/Stop (2.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-636000
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-636000: (2.374038916s)
--- PASS: TestNoKubernetes/serial/Stop (2.37s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (19.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-636000 --driver=hyperkit 
E0429 13:31:26.515005   16301 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/18774-15855/.minikube/profiles/skaffold-330000/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-636000 --driver=hyperkit : (19.358186784s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (19.36s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.14s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-636000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-636000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (137.134282ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.14s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (38.7s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-979000 --alsologtostderr -v=1 --driver=hyperkit 
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-979000 --alsologtostderr -v=1 --driver=hyperkit : (38.688381671s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (38.70s)

                                                
                                    
TestPause/serial/Pause (0.56s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-979000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.56s)

                                                
                                    
TestPause/serial/VerifyStatus (0.17s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-979000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-979000 --output=json --layout=cluster: exit status 2 (169.700138ms)

                                                
                                                
-- stdout --
	{"Name":"pause-979000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-979000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.17s)

                                                
                                    
TestPause/serial/Unpause (0.54s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-979000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.54s)

                                                
                                    
TestPause/serial/PauseAgain (0.62s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-979000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.62s)

                                                
                                    
TestPause/serial/DeletePaused (5.29s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-979000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-979000 --alsologtostderr -v=5: (5.285972461s)
--- PASS: TestPause/serial/DeletePaused (5.29s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.25s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.25s)

                                                
                                    

Test skip (18/223)

TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.0/binaries (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)