Test Report: Hyperkit_macOS 19313

761b7fc65973460b6ca8311b028efa5f69b15d0b:2024-07-22:35453

Tests failed (8/343)

TestCertExpiration (417.63s)
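From the commands recorded in the log below, this test first starts a cluster whose certificates expire after 3 minutes, then re-starts the same profile with --cert-expiration=8760h; that second start is the step that fails here with exit status 90 (RUNTIME_ENABLE). A rough local reproduction sketch, assuming the same minikube binary, hyperkit driver, and profile name as this run; the sleep and the ssh diagnostic at the end are illustrative assumptions, not commands taken from this log:

  # first start: cluster certificates valid for only 3 minutes
  out/minikube-darwin-amd64 start -p cert-expiration-371000 --memory=2048 --cert-expiration=3m --driver=hyperkit

  # wait past the short expiry, then re-start the same profile with a one-year expiration
  sleep 200
  out/minikube-darwin-amd64 start -p cert-expiration-371000 --memory=2048 --cert-expiration=8760h --driver=hyperkit

  # on a RUNTIME_ENABLE failure, pull the docker unit journal from inside the VM (illustrative)
  out/minikube-darwin-amd64 -p cert-expiration-371000 ssh "sudo journalctl --no-pager -u docker"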

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-371000 --memory=2048 --cert-expiration=3m --driver=hyperkit 
cert_options_test.go:123: (dbg) Done: out/minikube-darwin-amd64 start -p cert-expiration-371000 --memory=2048 --cert-expiration=3m --driver=hyperkit : (37.784509405s)
cert_options_test.go:131: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-expiration-371000 --memory=2048 --cert-expiration=8760h --driver=hyperkit 
E0722 04:36:22.507796    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0722 04:36:39.075819    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/skaffold-264000/client.crt: no such file or directory
cert_options_test.go:131: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p cert-expiration-371000 --memory=2048 --cert-expiration=8760h --driver=hyperkit : exit status 90 (1m13.507031416s)

-- stdout --
	* [cert-expiration-371000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1111/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1111/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-371000" primary control-plane node in "cert-expiration-371000" cluster
	* Updating the running hyperkit "cert-expiration-371000" VM ...
	
	

-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 22 11:32:43 cert-expiration-371000 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 11:32:43 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:43.322897217Z" level=info msg="Starting up"
	Jul 22 11:32:43 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:43.323492105Z" level=info msg="containerd not running, starting managed containerd"
	Jul 22 11:32:43 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:43.324073372Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.341049798Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.356794439Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.356817586Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.356853756Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.356898236Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.356953843Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.356983507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.357113073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.357148360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.357160981Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.357168208Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.357225996Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.357371911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.358916055Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.358976076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.359113616Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.359158135Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.359248762Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.359314054Z" level=info msg="metadata content store policy set" policy=shared
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.362316010Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.362422585Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.362469431Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.362502974Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.362541289Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.362648330Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.362917642Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363022226Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363060122Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363090167Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363124694Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363157045Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363188883Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363222669Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363253687Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363284667Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363314405Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363345290Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363382154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363412973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363444445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363476995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363506765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363536663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363574639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363608382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363643479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363676224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363706026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363735555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363764954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363828368Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363871716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363903689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363932631Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364005239Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364048673Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364083817Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364114365Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364142682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364171786Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364199808Z" level=info msg="NRI interface is disabled by configuration."
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364380192Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364466843Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364550817Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364661420Z" level=info msg="containerd successfully booted in 0.024289s"
	Jul 22 11:32:44 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:44.362731293Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 22 11:32:44 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:44.371910259Z" level=info msg="Loading containers: start."
	Jul 22 11:32:44 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:44.456109809Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 22 11:32:44 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:44.555147082Z" level=info msg="Loading containers: done."
	Jul 22 11:32:44 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:44.565961556Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 22 11:32:44 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:44.566146806Z" level=info msg="Daemon has completed initialization"
	Jul 22 11:32:44 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:44.598133777Z" level=info msg="API listen on [::]:2376"
	Jul 22 11:32:44 cert-expiration-371000 systemd[1]: Started Docker Application Container Engine.
	Jul 22 11:32:44 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:44.598272993Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 22 11:32:45 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:45.679116249Z" level=info msg="Processing signal 'terminated'"
	Jul 22 11:32:45 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:45.680243025Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 22 11:32:45 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:45.680383169Z" level=info msg="Daemon shutdown complete"
	Jul 22 11:32:45 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:45.680476852Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 22 11:32:45 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:45.680692650Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 22 11:32:45 cert-expiration-371000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 22 11:32:46 cert-expiration-371000 systemd[1]: docker.service: Deactivated successfully.
	Jul 22 11:32:46 cert-expiration-371000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 22 11:32:46 cert-expiration-371000 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 11:32:46 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:46.716998963Z" level=info msg="Starting up"
	Jul 22 11:32:46 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:46.717405603Z" level=info msg="containerd not running, starting managed containerd"
	Jul 22 11:32:46 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:46.717986776Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=922
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.733305144Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.748886663Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.748934755Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.748963573Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.748973368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.748995923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.749005227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.749112263Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.749146057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.749157577Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.749165359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.749181310Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.749259822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.750882698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.750920587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751022318Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751088314Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751135334Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751151872Z" level=info msg="metadata content store policy set" policy=shared
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751277492Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751318554Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751330378Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751376982Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751390281Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751423941Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751620215Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751679579Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751690057Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751698277Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751707009Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751715432Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751723322Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751734781Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751756918Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751774975Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751835808Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751846288Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751859760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751869554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751878062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751886665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751897155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751905917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751914182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751922015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751930389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751939470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751946901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751954376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751963131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751972629Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751985512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751993598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752006399Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752064058Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752099831Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752110299Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752118997Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752125353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752133451Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752140494Z" level=info msg="NRI interface is disabled by configuration."
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752270624Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752327147Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752380734Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752393036Z" level=info msg="containerd successfully booted in 0.019439s"
	Jul 22 11:32:47 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:47.779187600Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 22 11:32:47 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:47.783289140Z" level=info msg="Loading containers: start."
	Jul 22 11:32:47 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:47.853697434Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 22 11:32:47 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:47.917056868Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 22 11:32:47 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:47.962462016Z" level=info msg="Loading containers: done."
	Jul 22 11:32:47 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:47.972189510Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 22 11:32:47 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:47.972241479Z" level=info msg="Daemon has completed initialization"
	Jul 22 11:32:47 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:47.992261218Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 22 11:32:47 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:47.992383110Z" level=info msg="API listen on [::]:2376"
	Jul 22 11:32:47 cert-expiration-371000 systemd[1]: Started Docker Application Container Engine.
	Jul 22 11:32:52 cert-expiration-371000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 22 11:32:52 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:52.334487375Z" level=info msg="Processing signal 'terminated'"
	Jul 22 11:32:52 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:52.335280351Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 22 11:32:52 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:52.335725715Z" level=info msg="Daemon shutdown complete"
	Jul 22 11:32:52 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:52.335787211Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 22 11:32:52 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:52.335835743Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 22 11:32:53 cert-expiration-371000 systemd[1]: docker.service: Deactivated successfully.
	Jul 22 11:32:53 cert-expiration-371000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 22 11:32:53 cert-expiration-371000 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:53.378795963Z" level=info msg="Starting up"
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:53.379416464Z" level=info msg="containerd not running, starting managed containerd"
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:53.380013845Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1277
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.398154396Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413150759Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413253463Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413287903Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413298214Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413318622Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413327172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413440581Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413475523Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413487912Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413495381Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413553634Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413638178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415216795Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415256876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415377546Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415417951Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415444278Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415460673Z" level=info msg="metadata content store policy set" policy=shared
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415649707Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415695347Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415707920Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415718198Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415727896Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415758043Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415963714Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416260801Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416413598Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416460259Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416499053Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416567113Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416608039Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416645140Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416679124Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416719129Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416753890Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416787087Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416827012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416868495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416904463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416939000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416970647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417006657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417040469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417216916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417306116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417413424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417459208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417491616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417581449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417628567Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417667750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417699640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417730343Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417800304Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417844047Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417934917Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417973244Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.418005983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.418037857Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.418066959Z" level=info msg="NRI interface is disabled by configuration."
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.418300276Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.418436481Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.418497979Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.418620867Z" level=info msg="containerd successfully booted in 0.021136s"
	Jul 22 11:32:54 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:54.422975059Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 22 11:32:54 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:54.462125066Z" level=info msg="Loading containers: start."
	Jul 22 11:32:54 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:54.540887898Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 22 11:32:54 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:54.609580799Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 22 11:32:54 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:54.654735495Z" level=info msg="Loading containers: done."
	Jul 22 11:32:54 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:54.664478385Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 22 11:32:54 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:54.664536821Z" level=info msg="Daemon has completed initialization"
	Jul 22 11:32:54 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:54.686004344Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 22 11:32:54 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:54.686170919Z" level=info msg="API listen on [::]:2376"
	Jul 22 11:32:54 cert-expiration-371000 systemd[1]: Started Docker Application Container Engine.
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.666729921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.666859841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.666886673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.666991816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.671283844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.671395509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.671463482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.671619179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.693511149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.693755129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.693780949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.693914087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.698164471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.698314388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.698346178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.698485089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.853776336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.854028415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.854226121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.855183798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.893462893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.893677164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.893780385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.893929246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.896000349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.896168192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.896304132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.896506004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.897720119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.898591534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.898795377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.898914838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:20 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:20.992697696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:20 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:20.992887996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:20 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:20.992988376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:20 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:20.993945882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.013369995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.013453432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.013569461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.013637798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.109580127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.109624213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.111980705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.112534350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.149181846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.149942878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.149982036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.150124691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.187422135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.187500587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.187513164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.187624942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.431879591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.432069689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.432097896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.432179926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:51 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:33:51.304160418Z" level=info msg="ignoring event" container=cec9f33fba6b356332cb941816485fe9b11cc485fa6de54abcb44df6da550c2a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:33:51 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:51.305030242Z" level=info msg="shim disconnected" id=cec9f33fba6b356332cb941816485fe9b11cc485fa6de54abcb44df6da550c2a namespace=moby
	Jul 22 11:33:51 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:51.305266617Z" level=warning msg="cleaning up after shim disconnected" id=cec9f33fba6b356332cb941816485fe9b11cc485fa6de54abcb44df6da550c2a namespace=moby
	Jul 22 11:33:51 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:51.305309676Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:33:52 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:52.001231815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:52 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:52.001276275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:52 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:52.001287778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:52 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:52.001529301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:36:10 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:10.949305583Z" level=info msg="Processing signal 'terminated'"
	Jul 22 11:36:10 cert-expiration-371000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.055519397Z" level=info msg="ignoring event" container=6cd74977b72c47babb3642056e1e003202b927d2b85a9b0a288c27a2f56e2af0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.055632768Z" level=info msg="shim disconnected" id=6cd74977b72c47babb3642056e1e003202b927d2b85a9b0a288c27a2f56e2af0 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.055684998Z" level=warning msg="cleaning up after shim disconnected" id=6cd74977b72c47babb3642056e1e003202b927d2b85a9b0a288c27a2f56e2af0 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.055694275Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.060155366Z" level=info msg="ignoring event" container=aee4a4beeddedd8804681224cbfc20d35f29944e81f87fe058c3f98b7e818836 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.060613821Z" level=info msg="shim disconnected" id=aee4a4beeddedd8804681224cbfc20d35f29944e81f87fe058c3f98b7e818836 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.060665015Z" level=warning msg="cleaning up after shim disconnected" id=aee4a4beeddedd8804681224cbfc20d35f29944e81f87fe058c3f98b7e818836 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.060673512Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.071258735Z" level=info msg="shim disconnected" id=b989a500b3290d74306522d768cfe41a76bcbaa69ebbb7214f8be2d30b91e6e8 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.071321680Z" level=warning msg="cleaning up after shim disconnected" id=b989a500b3290d74306522d768cfe41a76bcbaa69ebbb7214f8be2d30b91e6e8 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.071330162Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.071822404Z" level=info msg="ignoring event" container=b989a500b3290d74306522d768cfe41a76bcbaa69ebbb7214f8be2d30b91e6e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.083858512Z" level=info msg="ignoring event" container=9ab0d9f14597af327f3d3e990acbb781687233a04ac2ab428f68d5719cb48a44 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.084042816Z" level=info msg="shim disconnected" id=9ab0d9f14597af327f3d3e990acbb781687233a04ac2ab428f68d5719cb48a44 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.084240546Z" level=warning msg="cleaning up after shim disconnected" id=9ab0d9f14597af327f3d3e990acbb781687233a04ac2ab428f68d5719cb48a44 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.084538383Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.101564761Z" level=info msg="ignoring event" container=c2c94db7394f6f26ce2efcc58ee8e0483392e8239c8849288c604c397db0a9dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.101611543Z" level=info msg="ignoring event" container=8c4b1ea747198c6eba326546f20cf1831b1163c264b24dd74367cff7dae94a61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.101634153Z" level=info msg="ignoring event" container=87a58d6a9a40d07fbf432db4235d6334a54e6eb3636aa010abeb83abfa060d90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.101645157Z" level=info msg="ignoring event" container=4dffe484555ddf8f830dc7921536167194cf21f0c477336565fc99ded54bcbec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.101658190Z" level=info msg="ignoring event" container=6046af764b3eb3700b1c9a05253cc6cea3aa93e9aaae28be25ee5f2363ff3e67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.102863333Z" level=info msg="shim disconnected" id=6046af764b3eb3700b1c9a05253cc6cea3aa93e9aaae28be25ee5f2363ff3e67 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.102938501Z" level=warning msg="cleaning up after shim disconnected" id=6046af764b3eb3700b1c9a05253cc6cea3aa93e9aaae28be25ee5f2363ff3e67 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.102969371Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.103423978Z" level=info msg="shim disconnected" id=c2c94db7394f6f26ce2efcc58ee8e0483392e8239c8849288c604c397db0a9dd namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.103495279Z" level=warning msg="cleaning up after shim disconnected" id=c2c94db7394f6f26ce2efcc58ee8e0483392e8239c8849288c604c397db0a9dd namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.103504099Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.106924516Z" level=info msg="shim disconnected" id=4dffe484555ddf8f830dc7921536167194cf21f0c477336565fc99ded54bcbec namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.106989796Z" level=warning msg="cleaning up after shim disconnected" id=4dffe484555ddf8f830dc7921536167194cf21f0c477336565fc99ded54bcbec namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.107020943Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.113957654Z" level=info msg="shim disconnected" id=4f2c8e5cf3a62fe16b8fb7eeb3a3bae53a1289352d5666d477ab5834080f9eb1 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.114007541Z" level=warning msg="cleaning up after shim disconnected" id=4f2c8e5cf3a62fe16b8fb7eeb3a3bae53a1289352d5666d477ab5834080f9eb1 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.114016088Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.114379509Z" level=info msg="shim disconnected" id=8c4b1ea747198c6eba326546f20cf1831b1163c264b24dd74367cff7dae94a61 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.114435886Z" level=warning msg="cleaning up after shim disconnected" id=8c4b1ea747198c6eba326546f20cf1831b1163c264b24dd74367cff7dae94a61 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.114464669Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.121911240Z" level=info msg="shim disconnected" id=54c6e229dc5bf006ae5b0208e17f992f566b49d9f23bfa8807052b951312f371 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.121960224Z" level=warning msg="cleaning up after shim disconnected" id=54c6e229dc5bf006ae5b0208e17f992f566b49d9f23bfa8807052b951312f371 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.121968604Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.122120128Z" level=info msg="ignoring event" container=54c6e229dc5bf006ae5b0208e17f992f566b49d9f23bfa8807052b951312f371 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.122165975Z" level=info msg="ignoring event" container=4f2c8e5cf3a62fe16b8fb7eeb3a3bae53a1289352d5666d477ab5834080f9eb1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.122900962Z" level=info msg="shim disconnected" id=87a58d6a9a40d07fbf432db4235d6334a54e6eb3636aa010abeb83abfa060d90 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.122943023Z" level=warning msg="cleaning up after shim disconnected" id=87a58d6a9a40d07fbf432db4235d6334a54e6eb3636aa010abeb83abfa060d90 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.122951642Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.128985604Z" level=info msg="ignoring event" container=ebf670c2055067a40346faf90b3b3c581dbc0d0c0340e62c79cf044c67aab6ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.135347133Z" level=info msg="shim disconnected" id=ebf670c2055067a40346faf90b3b3c581dbc0d0c0340e62c79cf044c67aab6ae namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.135399796Z" level=warning msg="cleaning up after shim disconnected" id=ebf670c2055067a40346faf90b3b3c581dbc0d0c0340e62c79cf044c67aab6ae namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.135408738Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.137258688Z" level=warning msg="cleanup warnings time=\"2024-07-22T11:36:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.149775083Z" level=warning msg="cleanup warnings time=\"2024-07-22T11:36:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.154931313Z" level=warning msg="cleanup warnings time=\"2024-07-22T11:36:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.165018049Z" level=warning msg="cleanup warnings time=\"2024-07-22T11:36:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.188423719Z" level=warning msg="cleanup warnings time=\"2024-07-22T11:36:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 22 11:36:15 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:15.987849601Z" level=info msg="shim disconnected" id=6d8a2bf7202a72a52a67ab1d3544126fb67dffea5f0b9be5d34a187b1679565b namespace=moby
	Jul 22 11:36:15 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:15.991210759Z" level=warning msg="cleaning up after shim disconnected" id=6d8a2bf7202a72a52a67ab1d3544126fb67dffea5f0b9be5d34a187b1679565b namespace=moby
	Jul 22 11:36:15 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:15.991257574Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:15 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:15.993985949Z" level=info msg="ignoring event" container=6d8a2bf7202a72a52a67ab1d3544126fb67dffea5f0b9be5d34a187b1679565b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:20 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:20.976155260Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=1a9a78693f7e73e0aa80ea1efb84be081dbcc81bc962e3df19febf2f55c84538
	Jul 22 11:36:20 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:20.999099298Z" level=info msg="ignoring event" container=1a9a78693f7e73e0aa80ea1efb84be081dbcc81bc962e3df19febf2f55c84538 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:20 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:20.999005911Z" level=info msg="shim disconnected" id=1a9a78693f7e73e0aa80ea1efb84be081dbcc81bc962e3df19febf2f55c84538 namespace=moby
	Jul 22 11:36:20 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:20.999507624Z" level=warning msg="cleaning up after shim disconnected" id=1a9a78693f7e73e0aa80ea1efb84be081dbcc81bc962e3df19febf2f55c84538 namespace=moby
	Jul 22 11:36:20 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:20.999618875Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:21 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:21.019668441Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 22 11:36:21 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:21.020089991Z" level=info msg="Daemon shutdown complete"
	Jul 22 11:36:21 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:21.020153536Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 22 11:36:21 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:21.020190386Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 22 11:36:22 cert-expiration-371000 systemd[1]: docker.service: Deactivated successfully.
	Jul 22 11:36:22 cert-expiration-371000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 22 11:36:22 cert-expiration-371000 systemd[1]: docker.service: Consumed 3.263s CPU time.
	Jul 22 11:36:22 cert-expiration-371000 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 11:36:22 cert-expiration-371000 dockerd[3833]: time="2024-07-22T11:36:22.054447432Z" level=info msg="Starting up"
	Jul 22 11:37:22 cert-expiration-371000 dockerd[3833]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 22 11:37:22 cert-expiration-371000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 22 11:37:22 cert-expiration-371000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 22 11:37:22 cert-expiration-371000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
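The failure output above already names the diagnostic commands to run; the lines below are only a minimal sketch of how they could be invoked against this run, assuming the cert-expiration-371000 profile and its VM are still present and reusing the test's binary path (using minikube ssh here is just one way to reach the in-VM systemd logs, not something the test itself does):

	# Collect minikube's log bundle for attaching to a GitHub issue (flag taken from the advice box above)
	out/minikube-darwin-amd64 logs -p cert-expiration-371000 --file=logs.txt

	# Inspect the docker.service failure inside the VM (the two commands named in the error output)
	out/minikube-darwin-amd64 ssh -p cert-expiration-371000 -- sudo systemctl status docker.service
	out/minikube-darwin-amd64 ssh -p cert-expiration-371000 -- sudo journalctl --no-pager -u docker
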
cert_options_test.go:133: failed to start minikube after cert expiration: "out/minikube-darwin-amd64 start -p cert-expiration-371000 --memory=2048 --cert-expiration=8760h --driver=hyperkit " : exit status 90
cert_options_test.go:136: minikube start output did not warn about expired certs: 
-- stdout --
	* [cert-expiration-371000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1111/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1111/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "cert-expiration-371000" primary control-plane node in "cert-expiration-371000" cluster
	* Updating the running hyperkit "cert-expiration-371000" VM ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 22 11:32:43 cert-expiration-371000 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 11:32:43 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:43.322897217Z" level=info msg="Starting up"
	Jul 22 11:32:43 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:43.323492105Z" level=info msg="containerd not running, starting managed containerd"
	Jul 22 11:32:43 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:43.324073372Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.341049798Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.356794439Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.356817586Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.356853756Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.356898236Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.356953843Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.356983507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.357113073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.357148360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.357160981Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.357168208Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.357225996Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.357371911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.358916055Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.358976076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.359113616Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.359158135Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.359248762Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.359314054Z" level=info msg="metadata content store policy set" policy=shared
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.362316010Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.362422585Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.362469431Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.362502974Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.362541289Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.362648330Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.362917642Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363022226Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363060122Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363090167Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363124694Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363157045Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363188883Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363222669Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363253687Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363284667Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363314405Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363345290Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363382154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363412973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363444445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363476995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363506765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363536663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363574639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363608382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363643479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363676224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363706026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363735555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363764954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363828368Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363871716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363903689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363932631Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364005239Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364048673Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364083817Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364114365Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364142682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364171786Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364199808Z" level=info msg="NRI interface is disabled by configuration."
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364380192Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364466843Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364550817Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364661420Z" level=info msg="containerd successfully booted in 0.024289s"
	Jul 22 11:32:44 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:44.362731293Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 22 11:32:44 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:44.371910259Z" level=info msg="Loading containers: start."
	Jul 22 11:32:44 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:44.456109809Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 22 11:32:44 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:44.555147082Z" level=info msg="Loading containers: done."
	Jul 22 11:32:44 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:44.565961556Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 22 11:32:44 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:44.566146806Z" level=info msg="Daemon has completed initialization"
	Jul 22 11:32:44 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:44.598133777Z" level=info msg="API listen on [::]:2376"
	Jul 22 11:32:44 cert-expiration-371000 systemd[1]: Started Docker Application Container Engine.
	Jul 22 11:32:44 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:44.598272993Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 22 11:32:45 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:45.679116249Z" level=info msg="Processing signal 'terminated'"
	Jul 22 11:32:45 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:45.680243025Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 22 11:32:45 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:45.680383169Z" level=info msg="Daemon shutdown complete"
	Jul 22 11:32:45 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:45.680476852Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 22 11:32:45 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:45.680692650Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 22 11:32:45 cert-expiration-371000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 22 11:32:46 cert-expiration-371000 systemd[1]: docker.service: Deactivated successfully.
	Jul 22 11:32:46 cert-expiration-371000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 22 11:32:46 cert-expiration-371000 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 11:32:46 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:46.716998963Z" level=info msg="Starting up"
	Jul 22 11:32:46 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:46.717405603Z" level=info msg="containerd not running, starting managed containerd"
	Jul 22 11:32:46 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:46.717986776Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=922
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.733305144Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.748886663Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.748934755Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.748963573Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.748973368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.748995923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.749005227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.749112263Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.749146057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.749157577Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.749165359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.749181310Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.749259822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.750882698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.750920587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751022318Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751088314Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751135334Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751151872Z" level=info msg="metadata content store policy set" policy=shared
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751277492Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751318554Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751330378Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751376982Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751390281Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751423941Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751620215Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751679579Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751690057Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751698277Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751707009Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751715432Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751723322Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751734781Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751756918Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751774975Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751835808Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751846288Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751859760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751869554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751878062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751886665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751897155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751905917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751914182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751922015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751930389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751939470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751946901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751954376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751963131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751972629Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751985512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751993598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752006399Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752064058Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752099831Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752110299Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752118997Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752125353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752133451Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752140494Z" level=info msg="NRI interface is disabled by configuration."
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752270624Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752327147Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752380734Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752393036Z" level=info msg="containerd successfully booted in 0.019439s"
	Jul 22 11:32:47 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:47.779187600Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 22 11:32:47 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:47.783289140Z" level=info msg="Loading containers: start."
	Jul 22 11:32:47 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:47.853697434Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 22 11:32:47 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:47.917056868Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 22 11:32:47 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:47.962462016Z" level=info msg="Loading containers: done."
	Jul 22 11:32:47 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:47.972189510Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 22 11:32:47 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:47.972241479Z" level=info msg="Daemon has completed initialization"
	Jul 22 11:32:47 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:47.992261218Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 22 11:32:47 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:47.992383110Z" level=info msg="API listen on [::]:2376"
	Jul 22 11:32:47 cert-expiration-371000 systemd[1]: Started Docker Application Container Engine.
	Jul 22 11:32:52 cert-expiration-371000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 22 11:32:52 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:52.334487375Z" level=info msg="Processing signal 'terminated'"
	Jul 22 11:32:52 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:52.335280351Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 22 11:32:52 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:52.335725715Z" level=info msg="Daemon shutdown complete"
	Jul 22 11:32:52 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:52.335787211Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 22 11:32:52 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:52.335835743Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 22 11:32:53 cert-expiration-371000 systemd[1]: docker.service: Deactivated successfully.
	Jul 22 11:32:53 cert-expiration-371000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 22 11:32:53 cert-expiration-371000 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:53.378795963Z" level=info msg="Starting up"
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:53.379416464Z" level=info msg="containerd not running, starting managed containerd"
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:53.380013845Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1277
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.398154396Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413150759Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413253463Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413287903Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413298214Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413318622Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413327172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413440581Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413475523Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413487912Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413495381Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413553634Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413638178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415216795Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415256876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415377546Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415417951Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415444278Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415460673Z" level=info msg="metadata content store policy set" policy=shared
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415649707Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415695347Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415707920Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415718198Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415727896Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415758043Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415963714Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416260801Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416413598Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416460259Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416499053Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416567113Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416608039Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416645140Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416679124Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416719129Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416753890Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416787087Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416827012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416868495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416904463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416939000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416970647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417006657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417040469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417216916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417306116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417413424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417459208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417491616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417581449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417628567Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417667750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417699640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417730343Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417800304Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417844047Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417934917Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417973244Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.418005983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.418037857Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.418066959Z" level=info msg="NRI interface is disabled by configuration."
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.418300276Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.418436481Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.418497979Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.418620867Z" level=info msg="containerd successfully booted in 0.021136s"
	Jul 22 11:32:54 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:54.422975059Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 22 11:32:54 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:54.462125066Z" level=info msg="Loading containers: start."
	Jul 22 11:32:54 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:54.540887898Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 22 11:32:54 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:54.609580799Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 22 11:32:54 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:54.654735495Z" level=info msg="Loading containers: done."
	Jul 22 11:32:54 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:54.664478385Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 22 11:32:54 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:54.664536821Z" level=info msg="Daemon has completed initialization"
	Jul 22 11:32:54 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:54.686004344Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 22 11:32:54 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:54.686170919Z" level=info msg="API listen on [::]:2376"
	Jul 22 11:32:54 cert-expiration-371000 systemd[1]: Started Docker Application Container Engine.
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.666729921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.666859841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.666886673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.666991816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.671283844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.671395509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.671463482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.671619179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.693511149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.693755129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.693780949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.693914087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.698164471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.698314388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.698346178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.698485089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.853776336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.854028415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.854226121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.855183798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.893462893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.893677164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.893780385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.893929246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.896000349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.896168192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.896304132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.896506004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.897720119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.898591534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.898795377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.898914838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:20 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:20.992697696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:20 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:20.992887996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:20 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:20.992988376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:20 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:20.993945882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.013369995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.013453432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.013569461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.013637798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.109580127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.109624213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.111980705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.112534350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.149181846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.149942878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.149982036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.150124691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.187422135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.187500587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.187513164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.187624942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.431879591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.432069689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.432097896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.432179926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:51 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:33:51.304160418Z" level=info msg="ignoring event" container=cec9f33fba6b356332cb941816485fe9b11cc485fa6de54abcb44df6da550c2a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:33:51 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:51.305030242Z" level=info msg="shim disconnected" id=cec9f33fba6b356332cb941816485fe9b11cc485fa6de54abcb44df6da550c2a namespace=moby
	Jul 22 11:33:51 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:51.305266617Z" level=warning msg="cleaning up after shim disconnected" id=cec9f33fba6b356332cb941816485fe9b11cc485fa6de54abcb44df6da550c2a namespace=moby
	Jul 22 11:33:51 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:51.305309676Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:33:52 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:52.001231815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:52 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:52.001276275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:52 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:52.001287778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:52 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:52.001529301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:36:10 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:10.949305583Z" level=info msg="Processing signal 'terminated'"
	Jul 22 11:36:10 cert-expiration-371000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.055519397Z" level=info msg="ignoring event" container=6cd74977b72c47babb3642056e1e003202b927d2b85a9b0a288c27a2f56e2af0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.055632768Z" level=info msg="shim disconnected" id=6cd74977b72c47babb3642056e1e003202b927d2b85a9b0a288c27a2f56e2af0 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.055684998Z" level=warning msg="cleaning up after shim disconnected" id=6cd74977b72c47babb3642056e1e003202b927d2b85a9b0a288c27a2f56e2af0 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.055694275Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.060155366Z" level=info msg="ignoring event" container=aee4a4beeddedd8804681224cbfc20d35f29944e81f87fe058c3f98b7e818836 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.060613821Z" level=info msg="shim disconnected" id=aee4a4beeddedd8804681224cbfc20d35f29944e81f87fe058c3f98b7e818836 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.060665015Z" level=warning msg="cleaning up after shim disconnected" id=aee4a4beeddedd8804681224cbfc20d35f29944e81f87fe058c3f98b7e818836 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.060673512Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.071258735Z" level=info msg="shim disconnected" id=b989a500b3290d74306522d768cfe41a76bcbaa69ebbb7214f8be2d30b91e6e8 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.071321680Z" level=warning msg="cleaning up after shim disconnected" id=b989a500b3290d74306522d768cfe41a76bcbaa69ebbb7214f8be2d30b91e6e8 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.071330162Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.071822404Z" level=info msg="ignoring event" container=b989a500b3290d74306522d768cfe41a76bcbaa69ebbb7214f8be2d30b91e6e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.083858512Z" level=info msg="ignoring event" container=9ab0d9f14597af327f3d3e990acbb781687233a04ac2ab428f68d5719cb48a44 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.084042816Z" level=info msg="shim disconnected" id=9ab0d9f14597af327f3d3e990acbb781687233a04ac2ab428f68d5719cb48a44 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.084240546Z" level=warning msg="cleaning up after shim disconnected" id=9ab0d9f14597af327f3d3e990acbb781687233a04ac2ab428f68d5719cb48a44 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.084538383Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.101564761Z" level=info msg="ignoring event" container=c2c94db7394f6f26ce2efcc58ee8e0483392e8239c8849288c604c397db0a9dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.101611543Z" level=info msg="ignoring event" container=8c4b1ea747198c6eba326546f20cf1831b1163c264b24dd74367cff7dae94a61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.101634153Z" level=info msg="ignoring event" container=87a58d6a9a40d07fbf432db4235d6334a54e6eb3636aa010abeb83abfa060d90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.101645157Z" level=info msg="ignoring event" container=4dffe484555ddf8f830dc7921536167194cf21f0c477336565fc99ded54bcbec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.101658190Z" level=info msg="ignoring event" container=6046af764b3eb3700b1c9a05253cc6cea3aa93e9aaae28be25ee5f2363ff3e67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.102863333Z" level=info msg="shim disconnected" id=6046af764b3eb3700b1c9a05253cc6cea3aa93e9aaae28be25ee5f2363ff3e67 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.102938501Z" level=warning msg="cleaning up after shim disconnected" id=6046af764b3eb3700b1c9a05253cc6cea3aa93e9aaae28be25ee5f2363ff3e67 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.102969371Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.103423978Z" level=info msg="shim disconnected" id=c2c94db7394f6f26ce2efcc58ee8e0483392e8239c8849288c604c397db0a9dd namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.103495279Z" level=warning msg="cleaning up after shim disconnected" id=c2c94db7394f6f26ce2efcc58ee8e0483392e8239c8849288c604c397db0a9dd namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.103504099Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.106924516Z" level=info msg="shim disconnected" id=4dffe484555ddf8f830dc7921536167194cf21f0c477336565fc99ded54bcbec namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.106989796Z" level=warning msg="cleaning up after shim disconnected" id=4dffe484555ddf8f830dc7921536167194cf21f0c477336565fc99ded54bcbec namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.107020943Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.113957654Z" level=info msg="shim disconnected" id=4f2c8e5cf3a62fe16b8fb7eeb3a3bae53a1289352d5666d477ab5834080f9eb1 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.114007541Z" level=warning msg="cleaning up after shim disconnected" id=4f2c8e5cf3a62fe16b8fb7eeb3a3bae53a1289352d5666d477ab5834080f9eb1 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.114016088Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.114379509Z" level=info msg="shim disconnected" id=8c4b1ea747198c6eba326546f20cf1831b1163c264b24dd74367cff7dae94a61 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.114435886Z" level=warning msg="cleaning up after shim disconnected" id=8c4b1ea747198c6eba326546f20cf1831b1163c264b24dd74367cff7dae94a61 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.114464669Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.121911240Z" level=info msg="shim disconnected" id=54c6e229dc5bf006ae5b0208e17f992f566b49d9f23bfa8807052b951312f371 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.121960224Z" level=warning msg="cleaning up after shim disconnected" id=54c6e229dc5bf006ae5b0208e17f992f566b49d9f23bfa8807052b951312f371 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.121968604Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.122120128Z" level=info msg="ignoring event" container=54c6e229dc5bf006ae5b0208e17f992f566b49d9f23bfa8807052b951312f371 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.122165975Z" level=info msg="ignoring event" container=4f2c8e5cf3a62fe16b8fb7eeb3a3bae53a1289352d5666d477ab5834080f9eb1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.122900962Z" level=info msg="shim disconnected" id=87a58d6a9a40d07fbf432db4235d6334a54e6eb3636aa010abeb83abfa060d90 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.122943023Z" level=warning msg="cleaning up after shim disconnected" id=87a58d6a9a40d07fbf432db4235d6334a54e6eb3636aa010abeb83abfa060d90 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.122951642Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.128985604Z" level=info msg="ignoring event" container=ebf670c2055067a40346faf90b3b3c581dbc0d0c0340e62c79cf044c67aab6ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.135347133Z" level=info msg="shim disconnected" id=ebf670c2055067a40346faf90b3b3c581dbc0d0c0340e62c79cf044c67aab6ae namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.135399796Z" level=warning msg="cleaning up after shim disconnected" id=ebf670c2055067a40346faf90b3b3c581dbc0d0c0340e62c79cf044c67aab6ae namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.135408738Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.137258688Z" level=warning msg="cleanup warnings time=\"2024-07-22T11:36:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.149775083Z" level=warning msg="cleanup warnings time=\"2024-07-22T11:36:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.154931313Z" level=warning msg="cleanup warnings time=\"2024-07-22T11:36:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.165018049Z" level=warning msg="cleanup warnings time=\"2024-07-22T11:36:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.188423719Z" level=warning msg="cleanup warnings time=\"2024-07-22T11:36:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 22 11:36:15 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:15.987849601Z" level=info msg="shim disconnected" id=6d8a2bf7202a72a52a67ab1d3544126fb67dffea5f0b9be5d34a187b1679565b namespace=moby
	Jul 22 11:36:15 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:15.991210759Z" level=warning msg="cleaning up after shim disconnected" id=6d8a2bf7202a72a52a67ab1d3544126fb67dffea5f0b9be5d34a187b1679565b namespace=moby
	Jul 22 11:36:15 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:15.991257574Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:15 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:15.993985949Z" level=info msg="ignoring event" container=6d8a2bf7202a72a52a67ab1d3544126fb67dffea5f0b9be5d34a187b1679565b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:20 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:20.976155260Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=1a9a78693f7e73e0aa80ea1efb84be081dbcc81bc962e3df19febf2f55c84538
	Jul 22 11:36:20 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:20.999099298Z" level=info msg="ignoring event" container=1a9a78693f7e73e0aa80ea1efb84be081dbcc81bc962e3df19febf2f55c84538 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:20 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:20.999005911Z" level=info msg="shim disconnected" id=1a9a78693f7e73e0aa80ea1efb84be081dbcc81bc962e3df19febf2f55c84538 namespace=moby
	Jul 22 11:36:20 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:20.999507624Z" level=warning msg="cleaning up after shim disconnected" id=1a9a78693f7e73e0aa80ea1efb84be081dbcc81bc962e3df19febf2f55c84538 namespace=moby
	Jul 22 11:36:20 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:20.999618875Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:21 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:21.019668441Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 22 11:36:21 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:21.020089991Z" level=info msg="Daemon shutdown complete"
	Jul 22 11:36:21 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:21.020153536Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 22 11:36:21 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:21.020190386Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 22 11:36:22 cert-expiration-371000 systemd[1]: docker.service: Deactivated successfully.
	Jul 22 11:36:22 cert-expiration-371000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 22 11:36:22 cert-expiration-371000 systemd[1]: docker.service: Consumed 3.263s CPU time.
	Jul 22 11:36:22 cert-expiration-371000 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 11:36:22 cert-expiration-371000 dockerd[3833]: time="2024-07-22T11:36:22.054447432Z" level=info msg="Starting up"
	Jul 22 11:37:22 cert-expiration-371000 dockerd[3833]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 22 11:37:22 cert-expiration-371000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 22 11:37:22 cert-expiration-371000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 22 11:37:22 cert-expiration-371000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
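The journal captured above shows why the second start exits with RUNTIME_ENABLE: the restarted dockerd (pid 3833) times out dialing /run/containerd/containerd.sock, so docker.service fails and the restart never gets past enabling the container runtime. Below is a minimal post-mortem sketch, not part of the test suite: it shells out to the same minikube binary and profile name recorded in this run, and it assumes the VM exposes a containerd systemd unit that can be queried.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Diagnostic commands run inside the cert-expiration VM via `minikube ssh`.
		cmds := [][]string{
			// Is the system-level containerd unit up while dockerd waits on its socket?
			{"ssh", "-p", "cert-expiration-371000", "--", "sudo", "systemctl", "status", "containerd", "--no-pager"},
			// What did containerd log during the 60s dial window (11:36:22 - 11:37:22)?
			{"ssh", "-p", "cert-expiration-371000", "--", "sudo", "journalctl", "-u", "containerd", "--no-pager"},
		}
		for _, args := range cmds {
			out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
			fmt.Printf("$ out/minikube-darwin-amd64 %v\n%s(err: %v)\n\n", args, out, err)
		}
	}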
cert_options_test.go:138: *** TestCertExpiration FAILED at 2024-07-22 04:37:22.102424 -0700 PDT m=+4141.436775467
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-371000 -n cert-expiration-371000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p cert-expiration-371000 -n cert-expiration-371000: exit status 2 (158.112719ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestCertExpiration FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestCertExpiration]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p cert-expiration-371000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p cert-expiration-371000 logs -n 25: (2m0.688904422s)
helpers_test.go:252: TestCertExpiration logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-533000 sudo           | NoKubernetes-533000       | jenkins | v1.33.1 | 22 Jul 24 04:31 PDT |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-533000                | NoKubernetes-533000       | jenkins | v1.33.1 | 22 Jul 24 04:31 PDT | 22 Jul 24 04:31 PDT |
	| start   | -p NoKubernetes-533000                | NoKubernetes-533000       | jenkins | v1.33.1 | 22 Jul 24 04:31 PDT | 22 Jul 24 04:31 PDT |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| ssh     | force-systemd-env-128000              | force-systemd-env-128000  | jenkins | v1.33.1 | 22 Jul 24 04:31 PDT | 22 Jul 24 04:31 PDT |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-128000           | force-systemd-env-128000  | jenkins | v1.33.1 | 22 Jul 24 04:31 PDT | 22 Jul 24 04:31 PDT |
	| ssh     | -p NoKubernetes-533000 sudo           | NoKubernetes-533000       | jenkins | v1.33.1 | 22 Jul 24 04:31 PDT |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-533000                | NoKubernetes-533000       | jenkins | v1.33.1 | 22 Jul 24 04:31 PDT | 22 Jul 24 04:31 PDT |
	| start   | -p force-systemd-flag-826000          | force-systemd-flag-826000 | jenkins | v1.33.1 | 22 Jul 24 04:31 PDT | 22 Jul 24 04:32 PDT |
	|         | --memory=2048 --force-systemd         |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| start   | -p docker-flags-856000                | docker-flags-856000       | jenkins | v1.33.1 | 22 Jul 24 04:31 PDT | 22 Jul 24 04:32 PDT |
	|         | --cache-images=false                  |                           |         |         |                     |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --install-addons=false                |                           |         |         |                     |                     |
	|         | --wait=false                          |                           |         |         |                     |                     |
	|         | --docker-env=FOO=BAR                  |                           |         |         |                     |                     |
	|         | --docker-env=BAZ=BAT                  |                           |         |         |                     |                     |
	|         | --docker-opt=debug                    |                           |         |         |                     |                     |
	|         | --docker-opt=icc=true                 |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=5                |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-826000             | force-systemd-flag-826000 | jenkins | v1.33.1 | 22 Jul 24 04:32 PDT | 22 Jul 24 04:32 PDT |
	|         | ssh docker info --format              |                           |         |         |                     |                     |
	|         | {{.CgroupDriver}}                     |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-826000          | force-systemd-flag-826000 | jenkins | v1.33.1 | 22 Jul 24 04:32 PDT | 22 Jul 24 04:32 PDT |
	| start   | -p cert-expiration-371000             | cert-expiration-371000    | jenkins | v1.33.1 | 22 Jul 24 04:32 PDT | 22 Jul 24 04:33 PDT |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| ssh     | docker-flags-856000 ssh               | docker-flags-856000       | jenkins | v1.33.1 | 22 Jul 24 04:32 PDT | 22 Jul 24 04:32 PDT |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=Environment                |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| ssh     | docker-flags-856000 ssh               | docker-flags-856000       | jenkins | v1.33.1 | 22 Jul 24 04:32 PDT | 22 Jul 24 04:32 PDT |
	|         | sudo systemctl show docker            |                           |         |         |                     |                     |
	|         | --property=ExecStart                  |                           |         |         |                     |                     |
	|         | --no-pager                            |                           |         |         |                     |                     |
	| delete  | -p docker-flags-856000                | docker-flags-856000       | jenkins | v1.33.1 | 22 Jul 24 04:32 PDT | 22 Jul 24 04:32 PDT |
	| start   | -p cert-options-926000                | cert-options-926000       | jenkins | v1.33.1 | 22 Jul 24 04:32 PDT | 22 Jul 24 04:33 PDT |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| ssh     | cert-options-926000 ssh               | cert-options-926000       | jenkins | v1.33.1 | 22 Jul 24 04:33 PDT | 22 Jul 24 04:33 PDT |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-926000 -- sudo        | cert-options-926000       | jenkins | v1.33.1 | 22 Jul 24 04:33 PDT | 22 Jul 24 04:33 PDT |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-926000                | cert-options-926000       | jenkins | v1.33.1 | 22 Jul 24 04:33 PDT | 22 Jul 24 04:33 PDT |
	| start   | -p kubernetes-upgrade-759000          | kubernetes-upgrade-759000 | jenkins | v1.33.1 | 22 Jul 24 04:33 PDT | 22 Jul 24 04:34 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-759000          | kubernetes-upgrade-759000 | jenkins | v1.33.1 | 22 Jul 24 04:34 PDT | 22 Jul 24 04:34 PDT |
	| start   | -p kubernetes-upgrade-759000          | kubernetes-upgrade-759000 | jenkins | v1.33.1 | 22 Jul 24 04:34 PDT | 22 Jul 24 04:36 PDT |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| start   | -p cert-expiration-371000             | cert-expiration-371000    | jenkins | v1.33.1 | 22 Jul 24 04:36 PDT |                     |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=8760h               |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-759000          | kubernetes-upgrade-759000 | jenkins | v1.33.1 | 22 Jul 24 04:36 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-759000          | kubernetes-upgrade-759000 | jenkins | v1.33.1 | 22 Jul 24 04:36 PDT |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0   |                           |         |         |                     |                     |
	|         | --alsologtostderr -v=1                |                           |         |         |                     |                     |
	|         | --driver=hyperkit                     |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 04:36:59
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 04:36:59.138513    6653 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:36:59.138679    6653 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:36:59.138684    6653 out.go:304] Setting ErrFile to fd 2...
	I0722 04:36:59.138688    6653 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:36:59.138867    6653 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1111/.minikube/bin
	I0722 04:36:59.140186    6653 out.go:298] Setting JSON to false
	I0722 04:36:59.162817    6653 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":5788,"bootTime":1721642431,"procs":436,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0722 04:36:59.162915    6653 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:36:59.184504    6653 out.go:177] * [kubernetes-upgrade-759000] minikube v1.33.1 on Darwin 14.5
	I0722 04:36:59.242417    6653 notify.go:220] Checking for updates...
	I0722 04:36:59.262842    6653 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:36:59.284120    6653 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1111/kubeconfig
	I0722 04:36:59.325930    6653 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0722 04:36:59.368335    6653 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:36:59.388963    6653 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1111/.minikube
	I0722 04:36:59.410231    6653 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:36:59.431871    6653 config.go:182] Loaded profile config "kubernetes-upgrade-759000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0722 04:36:59.432546    6653 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:36:59.432624    6653 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:36:59.442726    6653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54675
	I0722 04:36:59.443169    6653 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:36:59.443578    6653 main.go:141] libmachine: Using API Version  1
	I0722 04:36:59.443590    6653 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:36:59.443828    6653 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:36:59.443964    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .DriverName
	I0722 04:36:59.444180    6653 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:36:59.444432    6653 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:36:59.444452    6653 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:36:59.452924    6653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54677
	I0722 04:36:59.453276    6653 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:36:59.453588    6653 main.go:141] libmachine: Using API Version  1
	I0722 04:36:59.453597    6653 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:36:59.453789    6653 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:36:59.453906    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .DriverName
	I0722 04:36:59.482271    6653 out.go:177] * Using the hyperkit driver based on existing profile
	I0722 04:36:59.524215    6653 start.go:297] selected driver: hyperkit
	I0722 04:36:59.524243    6653 start.go:901] validating driver "hyperkit" against &{Name:kubernetes-upgrade-759000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-759000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.33 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:36:59.524467    6653 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:36:59.528727    6653 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:36:59.528821    6653 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19313-1111/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0722 04:36:59.537027    6653 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0722 04:36:59.540790    6653 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:36:59.540811    6653 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0722 04:36:59.540936    6653 cni.go:84] Creating CNI manager for ""
	I0722 04:36:59.540951    6653 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:36:59.540990    6653 start.go:340] cluster config:
	{Name:kubernetes-upgrade-759000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-759000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.33 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:36:59.541087    6653 iso.go:125] acquiring lock: {Name:mk28fa3b914b659bb36b0449a0ad3ab1345dae32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:36:59.583112    6653 out.go:177] * Starting "kubernetes-upgrade-759000" primary control-plane node in "kubernetes-upgrade-759000" cluster
	I0722 04:36:59.603895    6653 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0722 04:36:59.603949    6653 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0722 04:36:59.603973    6653 cache.go:56] Caching tarball of preloaded images
	I0722 04:36:59.604113    6653 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0722 04:36:59.604128    6653 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0722 04:36:59.604240    6653 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/kubernetes-upgrade-759000/config.json ...
	I0722 04:36:59.604731    6653 start.go:360] acquireMachinesLock for kubernetes-upgrade-759000: {Name:mk52223550765842aacf96640479870ec8b5e985 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:36:59.604787    6653 start.go:364] duration metric: took 42.648µs to acquireMachinesLock for "kubernetes-upgrade-759000"
	I0722 04:36:59.604804    6653 start.go:96] Skipping create...Using existing machine configuration
	I0722 04:36:59.604816    6653 fix.go:54] fixHost starting: 
	I0722 04:36:59.605048    6653 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:36:59.605071    6653 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:36:59.613510    6653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54679
	I0722 04:36:59.613861    6653 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:36:59.614254    6653 main.go:141] libmachine: Using API Version  1
	I0722 04:36:59.614270    6653 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:36:59.614482    6653 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:36:59.614610    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .DriverName
	I0722 04:36:59.614701    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetState
	I0722 04:36:59.614788    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 04:36:59.614897    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) DBG | hyperkit pid from json: 6608
	I0722 04:36:59.615839    6653 fix.go:112] recreateIfNeeded on kubernetes-upgrade-759000: state=Running err=<nil>
	W0722 04:36:59.615858    6653 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 04:36:59.690108    6653 out.go:177] * Updating the running hyperkit "kubernetes-upgrade-759000" VM ...
	I0722 04:36:59.731020    6653 machine.go:94] provisionDockerMachine start ...
	I0722 04:36:59.731057    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .DriverName
	I0722 04:36:59.731359    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHHostname
	I0722 04:36:59.731572    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHPort
	I0722 04:36:59.731757    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHKeyPath
	I0722 04:36:59.731937    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHKeyPath
	I0722 04:36:59.732120    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHUsername
	I0722 04:36:59.732346    6653 main.go:141] libmachine: Using SSH client type: native
	I0722 04:36:59.732709    6653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x70910c0] 0x7093e20 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0722 04:36:59.732724    6653 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 04:36:59.792441    6653 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-759000
	
	I0722 04:36:59.792457    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetMachineName
	I0722 04:36:59.792592    6653 buildroot.go:166] provisioning hostname "kubernetes-upgrade-759000"
	I0722 04:36:59.792605    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetMachineName
	I0722 04:36:59.792719    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHHostname
	I0722 04:36:59.792828    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHPort
	I0722 04:36:59.792952    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHKeyPath
	I0722 04:36:59.793040    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHKeyPath
	I0722 04:36:59.793143    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHUsername
	I0722 04:36:59.793265    6653 main.go:141] libmachine: Using SSH client type: native
	I0722 04:36:59.793402    6653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x70910c0] 0x7093e20 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0722 04:36:59.793411    6653 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-759000 && echo "kubernetes-upgrade-759000" | sudo tee /etc/hostname
	I0722 04:36:59.866337    6653 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-759000
	
	I0722 04:36:59.866358    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHHostname
	I0722 04:36:59.866487    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHPort
	I0722 04:36:59.866575    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHKeyPath
	I0722 04:36:59.866672    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHKeyPath
	I0722 04:36:59.866754    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHUsername
	I0722 04:36:59.866896    6653 main.go:141] libmachine: Using SSH client type: native
	I0722 04:36:59.867067    6653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x70910c0] 0x7093e20 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0722 04:36:59.867079    6653 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-759000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-759000/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-759000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 04:36:59.930482    6653 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 04:36:59.930515    6653 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19313-1111/.minikube CaCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19313-1111/.minikube}
	I0722 04:36:59.930532    6653 buildroot.go:174] setting up certificates
	I0722 04:36:59.930546    6653 provision.go:84] configureAuth start
	I0722 04:36:59.930556    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetMachineName
	I0722 04:36:59.930690    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetIP
	I0722 04:36:59.930785    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHHostname
	I0722 04:36:59.930876    6653 provision.go:143] copyHostCerts
	I0722 04:36:59.930963    6653 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem, removing ...
	I0722 04:36:59.930973    6653 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem
	I0722 04:36:59.931127    6653 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem (1078 bytes)
	I0722 04:36:59.931363    6653 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem, removing ...
	I0722 04:36:59.931369    6653 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem
	I0722 04:36:59.931447    6653 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem (1123 bytes)
	I0722 04:36:59.931610    6653 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem, removing ...
	I0722 04:36:59.931616    6653 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem
	I0722 04:36:59.931697    6653 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem (1675 bytes)
	I0722 04:36:59.931849    6653 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-759000 san=[127.0.0.1 192.169.0.33 kubernetes-upgrade-759000 localhost minikube]
	I0722 04:37:00.139269    6653 provision.go:177] copyRemoteCerts
	I0722 04:37:00.139324    6653 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 04:37:00.139342    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHHostname
	I0722 04:37:00.139491    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHPort
	I0722 04:37:00.139592    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHKeyPath
	I0722 04:37:00.139695    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHUsername
	I0722 04:37:00.139792    6653 sshutil.go:53] new ssh client: &{IP:192.169.0.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/kubernetes-upgrade-759000/id_rsa Username:docker}
	I0722 04:37:00.177277    6653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0722 04:37:00.196909    6653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0722 04:37:00.218402    6653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 04:37:00.238035    6653 provision.go:87] duration metric: took 307.479462ms to configureAuth
	I0722 04:37:00.238048    6653 buildroot.go:189] setting minikube options for container-runtime
	I0722 04:37:00.238191    6653 config.go:182] Loaded profile config "kubernetes-upgrade-759000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0722 04:37:00.238205    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .DriverName
	I0722 04:37:00.238344    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHHostname
	I0722 04:37:00.238426    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHPort
	I0722 04:37:00.238526    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHKeyPath
	I0722 04:37:00.238596    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHKeyPath
	I0722 04:37:00.238672    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHUsername
	I0722 04:37:00.238773    6653 main.go:141] libmachine: Using SSH client type: native
	I0722 04:37:00.238899    6653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x70910c0] 0x7093e20 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0722 04:37:00.238907    6653 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0722 04:37:00.302142    6653 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0722 04:37:00.302153    6653 buildroot.go:70] root file system type: tmpfs
	I0722 04:37:00.302229    6653 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0722 04:37:00.302243    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHHostname
	I0722 04:37:00.302390    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHPort
	I0722 04:37:00.302497    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHKeyPath
	I0722 04:37:00.302584    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHKeyPath
	I0722 04:37:00.302671    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHUsername
	I0722 04:37:00.302805    6653 main.go:141] libmachine: Using SSH client type: native
	I0722 04:37:00.302960    6653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x70910c0] 0x7093e20 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0722 04:37:00.303005    6653 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0722 04:37:00.374346    6653 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0722 04:37:00.374372    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHHostname
	I0722 04:37:00.374508    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHPort
	I0722 04:37:00.374596    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHKeyPath
	I0722 04:37:00.374692    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHKeyPath
	I0722 04:37:00.374768    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHUsername
	I0722 04:37:00.374886    6653 main.go:141] libmachine: Using SSH client type: native
	I0722 04:37:00.375056    6653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x70910c0] 0x7093e20 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0722 04:37:00.375073    6653 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0722 04:37:00.437899    6653 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 04:37:00.437911    6653 machine.go:97] duration metric: took 706.883504ms to provisionDockerMachine
	I0722 04:37:00.437944    6653 start.go:293] postStartSetup for "kubernetes-upgrade-759000" (driver="hyperkit")
	I0722 04:37:00.437954    6653 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 04:37:00.437969    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .DriverName
	I0722 04:37:00.438145    6653 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 04:37:00.438159    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHHostname
	I0722 04:37:00.438243    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHPort
	I0722 04:37:00.438332    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHKeyPath
	I0722 04:37:00.438437    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHUsername
	I0722 04:37:00.438535    6653 sshutil.go:53] new ssh client: &{IP:192.169.0.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/kubernetes-upgrade-759000/id_rsa Username:docker}
	I0722 04:37:00.475257    6653 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 04:37:00.478520    6653 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 04:37:00.478535    6653 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1111/.minikube/addons for local assets ...
	I0722 04:37:00.478621    6653 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1111/.minikube/files for local assets ...
	I0722 04:37:00.478764    6653 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem -> 16372.pem in /etc/ssl/certs
	I0722 04:37:00.478926    6653 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 04:37:00.486134    6653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem --> /etc/ssl/certs/16372.pem (1708 bytes)
	I0722 04:37:00.508797    6653 start.go:296] duration metric: took 70.843463ms for postStartSetup
	I0722 04:37:00.508826    6653 fix.go:56] duration metric: took 904.030438ms for fixHost
	I0722 04:37:00.508842    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHHostname
	I0722 04:37:00.508977    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHPort
	I0722 04:37:00.509079    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHKeyPath
	I0722 04:37:00.509164    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHKeyPath
	I0722 04:37:00.509231    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHUsername
	I0722 04:37:00.509344    6653 main.go:141] libmachine: Using SSH client type: native
	I0722 04:37:00.509494    6653 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x70910c0] 0x7093e20 <nil>  [] 0s} 192.169.0.33 22 <nil> <nil>}
	I0722 04:37:00.509502    6653 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 04:37:00.569539    6653 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721648220.768330339
	
	I0722 04:37:00.569550    6653 fix.go:216] guest clock: 1721648220.768330339
	I0722 04:37:00.569555    6653 fix.go:229] Guest: 2024-07-22 04:37:00.768330339 -0700 PDT Remote: 2024-07-22 04:37:00.508831 -0700 PDT m=+1.405440787 (delta=259.499339ms)
	I0722 04:37:00.569579    6653 fix.go:200] guest clock delta is within tolerance: 259.499339ms
	I0722 04:37:00.569583    6653 start.go:83] releasing machines lock for "kubernetes-upgrade-759000", held for 964.806332ms
	I0722 04:37:00.569602    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .DriverName
	I0722 04:37:00.569725    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetIP
	I0722 04:37:00.569829    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .DriverName
	I0722 04:37:00.570134    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .DriverName
	I0722 04:37:00.570240    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .DriverName
	I0722 04:37:00.570332    6653 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 04:37:00.570369    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHHostname
	I0722 04:37:00.570374    6653 ssh_runner.go:195] Run: cat /version.json
	I0722 04:37:00.570383    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHHostname
	I0722 04:37:00.570478    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHPort
	I0722 04:37:00.570495    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHPort
	I0722 04:37:00.570606    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHKeyPath
	I0722 04:37:00.570611    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHKeyPath
	I0722 04:37:00.570707    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHUsername
	I0722 04:37:00.570720    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHUsername
	I0722 04:37:00.570797    6653 sshutil.go:53] new ssh client: &{IP:192.169.0.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/kubernetes-upgrade-759000/id_rsa Username:docker}
	I0722 04:37:00.570826    6653 sshutil.go:53] new ssh client: &{IP:192.169.0.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/kubernetes-upgrade-759000/id_rsa Username:docker}
	I0722 04:37:00.651588    6653 ssh_runner.go:195] Run: systemctl --version
	I0722 04:37:00.657781    6653 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 04:37:00.662064    6653 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 04:37:00.662108    6653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *bridge* -not -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e '/"dst": ".*:.*"/d' -e 's|^(.*)"dst": (.*)[,*]$|\1"dst": \2|g' -e '/"subnet": ".*:.*"/d' -e 's|^(.*)"subnet": ".*"(.*)[,*]$|\1"subnet": "10.244.0.0/16"\2|g' {}" ;
	I0722 04:37:00.670079    6653 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f -name *podman* -not -name *.mk_disabled -printf "%!p(MISSING), " -exec sh -c "sudo sed -i -r -e 's|^(.*)"subnet": ".*"(.*)$|\1"subnet": "10.244.0.0/16"\2|g' -e 's|^(.*)"gateway": ".*"(.*)$|\1"gateway": "10.244.0.1"\2|g' {}" ;
	I0722 04:37:00.683059    6653 cni.go:308] configured [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 04:37:00.683072    6653 start.go:495] detecting cgroup driver to use...
	I0722 04:37:00.683173    6653 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 04:37:00.698944    6653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.10"|' /etc/containerd/config.toml"
	I0722 04:37:00.711289    6653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0722 04:37:00.720612    6653 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0722 04:37:00.720655    6653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0722 04:37:00.730809    6653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 04:37:00.740463    6653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0722 04:37:00.749470    6653 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 04:37:00.758912    6653 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 04:37:00.769627    6653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0722 04:37:00.778665    6653 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0722 04:37:00.787713    6653 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0722 04:37:00.797073    6653 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 04:37:00.806414    6653 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 04:37:00.815565    6653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:37:00.951188    6653 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0722 04:37:00.969874    6653 start.go:495] detecting cgroup driver to use...
	I0722 04:37:00.969956    6653 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0722 04:37:00.985052    6653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 04:37:00.998218    6653 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 04:37:01.018468    6653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 04:37:01.030891    6653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 04:37:01.042099    6653 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 04:37:01.060029    6653 ssh_runner.go:195] Run: which cri-dockerd
	I0722 04:37:01.062905    6653 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0722 04:37:01.071036    6653 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (190 bytes)
	I0722 04:37:01.084825    6653 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0722 04:37:01.226162    6653 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0722 04:37:01.348978    6653 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0722 04:37:01.349069    6653 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0722 04:37:01.365018    6653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:37:01.515946    6653 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 04:37:13.947823    6653 ssh_runner.go:235] Completed: sudo systemctl restart docker: (12.432066265s)
	I0722 04:37:13.947889    6653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0722 04:37:13.964413    6653 ssh_runner.go:195] Run: sudo systemctl stop cri-docker.socket
	I0722 04:37:13.994428    6653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 04:37:14.004973    6653 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0722 04:37:14.102880    6653 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0722 04:37:14.204146    6653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:37:14.323205    6653 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0722 04:37:14.337051    6653 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 04:37:14.347350    6653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:37:14.439475    6653 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0722 04:37:14.510758    6653 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0722 04:37:14.510843    6653 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0722 04:37:14.515198    6653 start.go:563] Will wait 60s for crictl version
	I0722 04:37:14.515251    6653 ssh_runner.go:195] Run: which crictl
	I0722 04:37:14.518134    6653 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 04:37:14.543586    6653 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0722 04:37:14.543657    6653 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 04:37:14.560786    6653 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 04:37:14.620332    6653 out.go:204] * Preparing Kubernetes v1.31.0-beta.0 on Docker 27.0.3 ...
	I0722 04:37:14.620388    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetIP
	I0722 04:37:14.620882    6653 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0722 04:37:14.625704    6653 kubeadm.go:883] updating cluster {Name:kubernetes-upgrade-759000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-759000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.33 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 04:37:14.625776    6653 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0722 04:37:14.625829    6653 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0722 04:37:14.643873    6653 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	registry.k8s.io/kube-proxy:v1.31.0-beta.0
	registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	registry.k8s.io/etcd:3.5.14-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0722 04:37:14.643884    6653 docker.go:615] Images already preloaded, skipping extraction
	I0722 04:37:14.643955    6653 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0722 04:37:14.658634    6653 docker.go:685] Got preloaded images: -- stdout --
	registry.k8s.io/kube-apiserver:v1.31.0-beta.0
	registry.k8s.io/kube-scheduler:v1.31.0-beta.0
	registry.k8s.io/kube-proxy:v1.31.0-beta.0
	registry.k8s.io/kube-controller-manager:v1.31.0-beta.0
	registry.k8s.io/etcd:3.5.14-0
	registry.k8s.io/pause:3.10
	registry.k8s.io/coredns/coredns:v1.11.1
	gcr.io/k8s-minikube/storage-provisioner:v5
	k8s.gcr.io/kube-proxy:v1.20.0
	k8s.gcr.io/kube-controller-manager:v1.20.0
	k8s.gcr.io/kube-scheduler:v1.20.0
	k8s.gcr.io/kube-apiserver:v1.20.0
	k8s.gcr.io/etcd:3.4.13-0
	k8s.gcr.io/coredns:1.7.0
	k8s.gcr.io/pause:3.2
	
	-- /stdout --
	I0722 04:37:14.658652    6653 cache_images.go:84] Images are preloaded, skipping loading
	I0722 04:37:14.658662    6653 kubeadm.go:934] updating node { 192.169.0.33 8443 v1.31.0-beta.0 docker true true} ...
	I0722 04:37:14.658749    6653 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.31.0-beta.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-759000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.33
	
	[Install]
	 config:
	{KubernetesVersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-759000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 04:37:14.658825    6653 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0722 04:37:14.676410    6653 cni.go:84] Creating CNI manager for ""
	I0722 04:37:14.676428    6653 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:37:14.676439    6653 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 04:37:14.676452    6653 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.33 APIServerPort:8443 KubernetesVersion:v1.31.0-beta.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-759000 NodeName:kubernetes-upgrade-759000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.33"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.33 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 04:37:14.676549    6653 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.33
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "kubernetes-upgrade-759000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.33
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.33"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.31.0-beta.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 04:37:14.676612    6653 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.31.0-beta.0
	I0722 04:37:14.685146    6653 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 04:37:14.685191    6653 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0722 04:37:14.693759    6653 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (331 bytes)
	I0722 04:37:14.709136    6653 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (359 bytes)
	I0722 04:37:14.722490    6653 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
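Note: the kubeadm config just copied to /var/tmp/minikube/kubeadm.yaml.new is a single file holding four YAML documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration), rendered from the option struct logged at kubeadm.go:181. A minimal sketch of rendering such a file with text/template, assuming illustrative field and template names rather than minikube's actual template:

    package main

    import (
    	"os"
    	"text/template"
    )

    // Illustrative subset of the values substituted into the config.
    type kubeadmValues struct {
    	AdvertiseAddress  string
    	BindPort          int
    	NodeName          string
    	KubernetesVersion string
    	PodSubnet         string
    	ServiceSubnet     string
    }

    const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
    kind: InitConfiguration
    localAPIEndpoint:
      advertiseAddress: {{.AdvertiseAddress}}
      bindPort: {{.BindPort}}
    nodeRegistration:
      name: "{{.NodeName}}"
    ---
    apiVersion: kubeadm.k8s.io/v1beta3
    kind: ClusterConfiguration
    kubernetesVersion: {{.KubernetesVersion}}
    networking:
      podSubnet: "{{.PodSubnet}}"
      serviceSubnet: {{.ServiceSubnet}}
    `

    func main() {
    	v := kubeadmValues{
    		AdvertiseAddress:  "192.169.0.33",
    		BindPort:          8443,
    		NodeName:          "kubernetes-upgrade-759000",
    		KubernetesVersion: "v1.31.0-beta.0",
    		PodSubnet:         "10.244.0.0/16",
    		ServiceSubnet:     "10.96.0.0/12",
    	}
    	t := template.Must(template.New("kubeadm").Parse(tmpl))
    	if err := t.Execute(os.Stdout, v); err != nil {
    		panic(err)
    	}
    }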
	I0722 04:37:14.736133    6653 ssh_runner.go:195] Run: grep 192.169.0.33	control-plane.minikube.internal$ /etc/hosts
	I0722 04:37:14.739255    6653 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:37:14.835797    6653 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 04:37:14.847811    6653 certs.go:68] Setting up /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/kubernetes-upgrade-759000 for IP: 192.169.0.33
	I0722 04:37:14.847823    6653 certs.go:194] generating shared ca certs ...
	I0722 04:37:14.847835    6653 certs.go:226] acquiring lock for ca certs: {Name:mk31b6ba3ba4e51acc59db740baf7c8ba8dd988b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:37:14.847988    6653 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.key
	I0722 04:37:14.848040    6653 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.key
	I0722 04:37:14.848049    6653 certs.go:256] generating profile certs ...
	I0722 04:37:14.848141    6653 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/kubernetes-upgrade-759000/client.key
	I0722 04:37:14.848195    6653 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/kubernetes-upgrade-759000/apiserver.key.34a616a6
	I0722 04:37:14.848251    6653 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/kubernetes-upgrade-759000/proxy-client.key
	I0722 04:37:14.848444    6653 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/1637.pem (1338 bytes)
	W0722 04:37:14.848480    6653 certs.go:480] ignoring /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/1637_empty.pem, impossibly tiny 0 bytes
	I0722 04:37:14.848488    6653 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 04:37:14.848520    6653 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem (1078 bytes)
	I0722 04:37:14.848551    6653 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem (1123 bytes)
	I0722 04:37:14.848585    6653 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem (1675 bytes)
	I0722 04:37:14.848647    6653 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem (1708 bytes)
	I0722 04:37:14.849116    6653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 04:37:14.868390    6653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 04:37:14.888253    6653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 04:37:14.908018    6653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0722 04:37:14.926604    6653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/kubernetes-upgrade-759000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0722 04:37:14.945948    6653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/kubernetes-upgrade-759000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 04:37:14.964853    6653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/kubernetes-upgrade-759000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 04:37:14.983974    6653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/kubernetes-upgrade-759000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 04:37:15.009705    6653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 04:37:15.059377    6653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/1637.pem --> /usr/share/ca-certificates/1637.pem (1338 bytes)
	I0722 04:37:15.087480    6653 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem --> /usr/share/ca-certificates/16372.pem (1708 bytes)
	I0722 04:37:15.110487    6653 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 04:37:15.127078    6653 ssh_runner.go:195] Run: openssl version
	I0722 04:37:15.132707    6653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16372.pem && ln -fs /usr/share/ca-certificates/16372.pem /etc/ssl/certs/16372.pem"
	I0722 04:37:15.142771    6653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16372.pem
	I0722 04:37:15.146587    6653 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:38 /usr/share/ca-certificates/16372.pem
	I0722 04:37:15.146630    6653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16372.pem
	I0722 04:37:15.156504    6653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16372.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 04:37:15.169167    6653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 04:37:15.180615    6653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 04:37:15.184430    6653 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 04:37:15.184473    6653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 04:37:15.189447    6653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 04:37:15.200591    6653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1637.pem && ln -fs /usr/share/ca-certificates/1637.pem /etc/ssl/certs/1637.pem"
	I0722 04:37:15.213294    6653 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1637.pem
	I0722 04:37:15.216853    6653 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:38 /usr/share/ca-certificates/1637.pem
	I0722 04:37:15.216898    6653 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1637.pem
	I0722 04:37:15.226104    6653 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1637.pem /etc/ssl/certs/51391683.0"
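Note: each `openssl x509 -hash -noout -in <cert>` run above prints the certificate's subject hash, and the following `ln -fs` creates the /etc/ssl/certs/<hash>.0 symlink (3ec20f2e.0, b5213941.0, 51391683.0) that OpenSSL-based clients use to locate a CA by subject hash. A minimal sketch of the same hash-and-link step, shelling out to openssl as the logged commands do (paths are illustrative):

    package main

    import (
    	"os"
    	"os/exec"
    	"path/filepath"
    	"strings"
    )

    func main() {
    	certPath := "/usr/share/ca-certificates/minikubeCA.pem" // illustrative path from the log

    	// Same hash computation the log runs: openssl x509 -hash -noout -in <cert>.
    	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
    	if err != nil {
    		panic(err)
    	}
    	hash := strings.TrimSpace(string(out))

    	// Create the <hash>.0 symlink OpenSSL uses to find the CA by subject hash.
    	link := filepath.Join("/etc/ssl/certs", hash+".0")
    	_ = os.Remove(link) // mirror ln -fs: replace an existing link if present
    	if err := os.Symlink(certPath, link); err != nil {
    		panic(err)
    	}
    }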
	I0722 04:37:15.246040    6653 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 04:37:15.249705    6653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 04:37:15.256041    6653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 04:37:15.263900    6653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 04:37:15.268843    6653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 04:37:15.274348    6653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 04:37:15.280083    6653 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
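Note: each `openssl x509 -checkend 86400` above exits non-zero if the certificate expires within the next 24 hours, which is how the restart path decides whether the existing control-plane certificates can be reused. The same check can be made directly against NotAfter; a minimal sketch with crypto/x509 (the file path is illustrative):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"os"
    	"time"
    )

    // expiresWithin reports whether the PEM-encoded certificate in path
    // expires within d, mirroring `openssl x509 -checkend <seconds>`.
    func expiresWithin(path string, d time.Duration) (bool, error) {
    	data, err := os.ReadFile(path)
    	if err != nil {
    		return false, err
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		return false, fmt.Errorf("no PEM block in %s", path)
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		return false, err
    	}
    	return time.Now().Add(d).After(cert.NotAfter), nil
    }

    func main() {
    	expiring, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println("expires within 24h:", expiring)
    }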
	I0722 04:37:15.296511    6653 kubeadm.go:392] StartCluster: {Name:kubernetes-upgrade-759000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesV
ersion:v1.31.0-beta.0 ClusterName:kubernetes-upgrade-759000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.33 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:37:15.296628    6653 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0722 04:37:15.335650    6653 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 04:37:15.358764    6653 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 04:37:15.358782    6653 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 04:37:15.358834    6653 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 04:37:15.371973    6653 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 04:37:15.372439    6653 kubeconfig.go:125] found "kubernetes-upgrade-759000" server: "https://192.169.0.33:8443"
	I0722 04:37:15.373145    6653 kapi.go:59] client config for kubernetes-upgrade-759000: &rest.Config{Host:"https://192.169.0.33:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/kubernetes-upgrade-759000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/kubernetes-upgrade-759000/client.key", CAFile:"/Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x8535ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0722 04:37:15.373631    6653 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 04:37:15.383218    6653 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.33
	I0722 04:37:15.383243    6653 kubeadm.go:1160] stopping kube-system containers ...
	I0722 04:37:15.383311    6653 ssh_runner.go:195] Run: docker ps -a --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0722 04:37:15.407468    6653 docker.go:483] Stopping containers: [5b5871b65204 bb36cb5c99f1 dea503c6c354 18260e563927 4272d2bbe762 1bfc2ea22815 07e001a16094 5d1b8ce77700 b4810764882e d77153ff662c 2e8aa4bc0b43 1909ab569579 d19f2f69a4a8 58001d7343de f8dd80b93359 a7df53184617 6e7c103faec3 0b5cec570c6b 3732a8f30cad aede2db35721 e5d983a56fe9 14b9e6134881 51e2c9ab2585]
	I0722 04:37:15.407549    6653 ssh_runner.go:195] Run: docker stop 5b5871b65204 bb36cb5c99f1 dea503c6c354 18260e563927 4272d2bbe762 1bfc2ea22815 07e001a16094 5d1b8ce77700 b4810764882e d77153ff662c 2e8aa4bc0b43 1909ab569579 d19f2f69a4a8 58001d7343de f8dd80b93359 a7df53184617 6e7c103faec3 0b5cec570c6b 3732a8f30cad aede2db35721 e5d983a56fe9 14b9e6134881 51e2c9ab2585
	I0722 04:37:15.752136    6653 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0722 04:37:15.783184    6653 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0722 04:37:15.790927    6653 kubeadm.go:157] found existing configuration files:
	-rw------- 1 root root 5647 Jul 22 11:36 /etc/kubernetes/admin.conf
	-rw------- 1 root root 5656 Jul 22 11:36 /etc/kubernetes/controller-manager.conf
	-rw------- 1 root root 5755 Jul 22 11:36 /etc/kubernetes/kubelet.conf
	-rw------- 1 root root 5600 Jul 22 11:36 /etc/kubernetes/scheduler.conf
	
	I0722 04:37:15.790977    6653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0722 04:37:15.797940    6653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0722 04:37:15.804988    6653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0722 04:37:15.811968    6653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0722 04:37:15.812004    6653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0722 04:37:15.819273    6653 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0722 04:37:15.826356    6653 kubeadm.go:163] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 1
	stdout:
	
	stderr:
	I0722 04:37:15.826394    6653 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0722 04:37:15.833644    6653 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0722 04:37:15.841050    6653 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 04:37:15.878928    6653 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 04:37:17.265444    6653 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.386515724s)
	I0722 04:37:17.265458    6653 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0722 04:37:17.421815    6653 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 04:37:17.472559    6653 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
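Note: because existing configuration files were found, the restart re-runs individual kubeadm init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml rather than performing a full `kubeadm init`. A minimal sketch of driving those phases in order, assuming the paths shown in the log (this is not minikube's bootstrapper code):

    package main

    import (
    	"fmt"
    	"os"
    	"os/exec"
    )

    func main() {
    	// Versioned binary directory and config path as seen in the log.
    	binDir := "/var/lib/minikube/binaries/v1.31.0-beta.0"
    	cfg := "/var/tmp/minikube/kubeadm.yaml"

    	// The logged commands also prepend binDir to PATH via `sudo env PATH=...`.
    	phases := [][]string{
    		{"init", "phase", "certs", "all", "--config", cfg},
    		{"init", "phase", "kubeconfig", "all", "--config", cfg},
    		{"init", "phase", "kubelet-start", "--config", cfg},
    		{"init", "phase", "control-plane", "all", "--config", cfg},
    		{"init", "phase", "etcd", "local", "--config", cfg},
    	}

    	for _, args := range phases {
    		cmd := exec.Command(binDir+"/kubeadm", args...)
    		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
    		if err := cmd.Run(); err != nil {
    			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", args, err)
    			os.Exit(1)
    		}
    	}
    }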
	I0722 04:37:17.527661    6653 api_server.go:52] waiting for apiserver process to appear ...
	I0722 04:37:17.527730    6653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 04:37:18.027842    6653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 04:37:18.528177    6653 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 04:37:18.545854    6653 api_server.go:72] duration metric: took 1.018214442s to wait for apiserver process to appear ...
	I0722 04:37:18.545868    6653 api_server.go:88] waiting for apiserver healthz status ...
	I0722 04:37:18.545884    6653 api_server.go:253] Checking apiserver healthz at https://192.169.0.33:8443/healthz ...
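Note: after the phases complete, the bootstrapper first waits for a kube-apiserver process (the pgrep loop) and then polls https://192.169.0.33:8443/healthz until it answers. A minimal sketch of that readiness poll, trusting the cluster CA; the endpoint and CA path are taken from the log, while the retry interval is an assumption:

    package main

    import (
    	"crypto/tls"
    	"crypto/x509"
    	"fmt"
    	"io"
    	"net/http"
    	"os"
    	"time"
    )

    func main() {
    	caPEM, err := os.ReadFile("/var/lib/minikube/certs/ca.crt")
    	if err != nil {
    		panic(err)
    	}
    	pool := x509.NewCertPool()
    	pool.AppendCertsFromPEM(caPEM)

    	client := &http.Client{
    		Timeout:   5 * time.Second,
    		Transport: &http.Transport{TLSClientConfig: &tls.Config{RootCAs: pool}},
    	}

    	// Poll /healthz until the apiserver reports ok.
    	for {
    		resp, err := client.Get("https://192.169.0.33:8443/healthz")
    		if err == nil {
    			body, _ := io.ReadAll(resp.Body)
    			resp.Body.Close()
    			if resp.StatusCode == http.StatusOK {
    				fmt.Println("apiserver healthy:", string(body))
    				return
    			}
    		}
    		time.Sleep(500 * time.Millisecond)
    	}
    }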
	I0722 04:37:21.874177    6615 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m11.162080402s)
	I0722 04:37:21.874233    6615 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0722 04:37:21.925004    6615 out.go:177] 
	W0722 04:37:21.948417    6615 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 22 11:32:43 cert-expiration-371000 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 11:32:43 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:43.322897217Z" level=info msg="Starting up"
	Jul 22 11:32:43 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:43.323492105Z" level=info msg="containerd not running, starting managed containerd"
	Jul 22 11:32:43 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:43.324073372Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=521
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.341049798Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.356794439Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.356817586Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.356853756Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.356898236Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.356953843Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.356983507Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.357113073Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.357148360Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.357160981Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.357168208Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.357225996Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.357371911Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.358916055Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.358976076Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.359113616Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.359158135Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.359248762Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.359314054Z" level=info msg="metadata content store policy set" policy=shared
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.362316010Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.362422585Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.362469431Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.362502974Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.362541289Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.362648330Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.362917642Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363022226Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363060122Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363090167Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363124694Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363157045Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363188883Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363222669Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363253687Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363284667Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363314405Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363345290Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363382154Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363412973Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363444445Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363476995Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363506765Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363536663Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363574639Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363608382Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363643479Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363676224Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363706026Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363735555Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363764954Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363828368Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363871716Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363903689Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.363932631Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364005239Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364048673Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364083817Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364114365Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364142682Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364171786Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364199808Z" level=info msg="NRI interface is disabled by configuration."
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364380192Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364466843Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364550817Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 22 11:32:43 cert-expiration-371000 dockerd[521]: time="2024-07-22T11:32:43.364661420Z" level=info msg="containerd successfully booted in 0.024289s"
	Jul 22 11:32:44 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:44.362731293Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 22 11:32:44 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:44.371910259Z" level=info msg="Loading containers: start."
	Jul 22 11:32:44 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:44.456109809Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 22 11:32:44 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:44.555147082Z" level=info msg="Loading containers: done."
	Jul 22 11:32:44 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:44.565961556Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 22 11:32:44 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:44.566146806Z" level=info msg="Daemon has completed initialization"
	Jul 22 11:32:44 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:44.598133777Z" level=info msg="API listen on [::]:2376"
	Jul 22 11:32:44 cert-expiration-371000 systemd[1]: Started Docker Application Container Engine.
	Jul 22 11:32:44 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:44.598272993Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 22 11:32:45 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:45.679116249Z" level=info msg="Processing signal 'terminated'"
	Jul 22 11:32:45 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:45.680243025Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 22 11:32:45 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:45.680383169Z" level=info msg="Daemon shutdown complete"
	Jul 22 11:32:45 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:45.680476852Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 22 11:32:45 cert-expiration-371000 dockerd[515]: time="2024-07-22T11:32:45.680692650Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 22 11:32:45 cert-expiration-371000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 22 11:32:46 cert-expiration-371000 systemd[1]: docker.service: Deactivated successfully.
	Jul 22 11:32:46 cert-expiration-371000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 22 11:32:46 cert-expiration-371000 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 11:32:46 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:46.716998963Z" level=info msg="Starting up"
	Jul 22 11:32:46 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:46.717405603Z" level=info msg="containerd not running, starting managed containerd"
	Jul 22 11:32:46 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:46.717986776Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=922
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.733305144Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.748886663Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.748934755Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.748963573Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.748973368Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.748995923Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.749005227Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.749112263Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.749146057Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.749157577Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.749165359Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.749181310Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.749259822Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.750882698Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.750920587Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751022318Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751088314Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751135334Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751151872Z" level=info msg="metadata content store policy set" policy=shared
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751277492Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751318554Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751330378Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751376982Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751390281Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751423941Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751620215Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751679579Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751690057Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751698277Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751707009Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751715432Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751723322Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751734781Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751756918Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751774975Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751835808Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751846288Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751859760Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751869554Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751878062Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751886665Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751897155Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751905917Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751914182Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751922015Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751930389Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751939470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751946901Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751954376Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751963131Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751972629Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751985512Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.751993598Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752006399Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752064058Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752099831Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752110299Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752118997Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752125353Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752133451Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752140494Z" level=info msg="NRI interface is disabled by configuration."
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752270624Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752327147Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752380734Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 22 11:32:46 cert-expiration-371000 dockerd[922]: time="2024-07-22T11:32:46.752393036Z" level=info msg="containerd successfully booted in 0.019439s"
	Jul 22 11:32:47 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:47.779187600Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 22 11:32:47 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:47.783289140Z" level=info msg="Loading containers: start."
	Jul 22 11:32:47 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:47.853697434Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 22 11:32:47 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:47.917056868Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 22 11:32:47 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:47.962462016Z" level=info msg="Loading containers: done."
	Jul 22 11:32:47 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:47.972189510Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 22 11:32:47 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:47.972241479Z" level=info msg="Daemon has completed initialization"
	Jul 22 11:32:47 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:47.992261218Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 22 11:32:47 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:47.992383110Z" level=info msg="API listen on [::]:2376"
	Jul 22 11:32:47 cert-expiration-371000 systemd[1]: Started Docker Application Container Engine.
	Jul 22 11:32:52 cert-expiration-371000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 22 11:32:52 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:52.334487375Z" level=info msg="Processing signal 'terminated'"
	Jul 22 11:32:52 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:52.335280351Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 22 11:32:52 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:52.335725715Z" level=info msg="Daemon shutdown complete"
	Jul 22 11:32:52 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:52.335787211Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 22 11:32:52 cert-expiration-371000 dockerd[915]: time="2024-07-22T11:32:52.335835743Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 22 11:32:53 cert-expiration-371000 systemd[1]: docker.service: Deactivated successfully.
	Jul 22 11:32:53 cert-expiration-371000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 22 11:32:53 cert-expiration-371000 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:53.378795963Z" level=info msg="Starting up"
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:53.379416464Z" level=info msg="containerd not running, starting managed containerd"
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:53.380013845Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=1277
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.398154396Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413150759Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413253463Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413287903Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413298214Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413318622Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413327172Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413440581Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413475523Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413487912Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413495381Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413553634Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.413638178Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415216795Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415256876Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415377546Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415417951Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415444278Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415460673Z" level=info msg="metadata content store policy set" policy=shared
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415649707Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415695347Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415707920Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415718198Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415727896Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415758043Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.415963714Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416260801Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416413598Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416460259Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416499053Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416567113Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416608039Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416645140Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416679124Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416719129Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416753890Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416787087Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416827012Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416868495Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416904463Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416939000Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.416970647Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417006657Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417040469Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417216916Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417306116Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417413424Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417459208Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417491616Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417581449Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417628567Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417667750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417699640Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417730343Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417800304Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417844047Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417934917Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.417973244Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.418005983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.418037857Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.418066959Z" level=info msg="NRI interface is disabled by configuration."
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.418300276Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.418436481Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.418497979Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 22 11:32:53 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:32:53.418620867Z" level=info msg="containerd successfully booted in 0.021136s"
	Jul 22 11:32:54 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:54.422975059Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 22 11:32:54 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:54.462125066Z" level=info msg="Loading containers: start."
	Jul 22 11:32:54 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:54.540887898Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 22 11:32:54 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:54.609580799Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 22 11:32:54 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:54.654735495Z" level=info msg="Loading containers: done."
	Jul 22 11:32:54 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:54.664478385Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 22 11:32:54 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:54.664536821Z" level=info msg="Daemon has completed initialization"
	Jul 22 11:32:54 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:54.686004344Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 22 11:32:54 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:32:54.686170919Z" level=info msg="API listen on [::]:2376"
	Jul 22 11:32:54 cert-expiration-371000 systemd[1]: Started Docker Application Container Engine.
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.666729921Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.666859841Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.666886673Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.666991816Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.671283844Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.671395509Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.671463482Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.671619179Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.693511149Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.693755129Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.693780949Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.693914087Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.698164471Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.698314388Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.698346178Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.698485089Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.853776336Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.854028415Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.854226121Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.855183798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.893462893Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.893677164Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.893780385Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.893929246Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.896000349Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.896168192Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.896304132Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.896506004Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.897720119Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.898591534Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.898795377Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:00 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:00.898914838Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:20 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:20.992697696Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:20 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:20.992887996Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:20 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:20.992988376Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:20 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:20.993945882Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.013369995Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.013453432Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.013569461Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.013637798Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.109580127Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.109624213Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.111980705Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.112534350Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.149181846Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.149942878Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.149982036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.150124691Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.187422135Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.187500587Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.187513164Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.187624942Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.431879591Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.432069689Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.432097896Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:21 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:21.432179926Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:51 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:33:51.304160418Z" level=info msg="ignoring event" container=cec9f33fba6b356332cb941816485fe9b11cc485fa6de54abcb44df6da550c2a module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:33:51 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:51.305030242Z" level=info msg="shim disconnected" id=cec9f33fba6b356332cb941816485fe9b11cc485fa6de54abcb44df6da550c2a namespace=moby
	Jul 22 11:33:51 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:51.305266617Z" level=warning msg="cleaning up after shim disconnected" id=cec9f33fba6b356332cb941816485fe9b11cc485fa6de54abcb44df6da550c2a namespace=moby
	Jul 22 11:33:51 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:51.305309676Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:33:52 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:52.001231815Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 11:33:52 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:52.001276275Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 11:33:52 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:52.001287778Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:33:52 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:33:52.001529301Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 11:36:10 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:10.949305583Z" level=info msg="Processing signal 'terminated'"
	Jul 22 11:36:10 cert-expiration-371000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.055519397Z" level=info msg="ignoring event" container=6cd74977b72c47babb3642056e1e003202b927d2b85a9b0a288c27a2f56e2af0 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.055632768Z" level=info msg="shim disconnected" id=6cd74977b72c47babb3642056e1e003202b927d2b85a9b0a288c27a2f56e2af0 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.055684998Z" level=warning msg="cleaning up after shim disconnected" id=6cd74977b72c47babb3642056e1e003202b927d2b85a9b0a288c27a2f56e2af0 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.055694275Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.060155366Z" level=info msg="ignoring event" container=aee4a4beeddedd8804681224cbfc20d35f29944e81f87fe058c3f98b7e818836 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.060613821Z" level=info msg="shim disconnected" id=aee4a4beeddedd8804681224cbfc20d35f29944e81f87fe058c3f98b7e818836 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.060665015Z" level=warning msg="cleaning up after shim disconnected" id=aee4a4beeddedd8804681224cbfc20d35f29944e81f87fe058c3f98b7e818836 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.060673512Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.071258735Z" level=info msg="shim disconnected" id=b989a500b3290d74306522d768cfe41a76bcbaa69ebbb7214f8be2d30b91e6e8 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.071321680Z" level=warning msg="cleaning up after shim disconnected" id=b989a500b3290d74306522d768cfe41a76bcbaa69ebbb7214f8be2d30b91e6e8 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.071330162Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.071822404Z" level=info msg="ignoring event" container=b989a500b3290d74306522d768cfe41a76bcbaa69ebbb7214f8be2d30b91e6e8 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.083858512Z" level=info msg="ignoring event" container=9ab0d9f14597af327f3d3e990acbb781687233a04ac2ab428f68d5719cb48a44 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.084042816Z" level=info msg="shim disconnected" id=9ab0d9f14597af327f3d3e990acbb781687233a04ac2ab428f68d5719cb48a44 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.084240546Z" level=warning msg="cleaning up after shim disconnected" id=9ab0d9f14597af327f3d3e990acbb781687233a04ac2ab428f68d5719cb48a44 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.084538383Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.101564761Z" level=info msg="ignoring event" container=c2c94db7394f6f26ce2efcc58ee8e0483392e8239c8849288c604c397db0a9dd module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.101611543Z" level=info msg="ignoring event" container=8c4b1ea747198c6eba326546f20cf1831b1163c264b24dd74367cff7dae94a61 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.101634153Z" level=info msg="ignoring event" container=87a58d6a9a40d07fbf432db4235d6334a54e6eb3636aa010abeb83abfa060d90 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.101645157Z" level=info msg="ignoring event" container=4dffe484555ddf8f830dc7921536167194cf21f0c477336565fc99ded54bcbec module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.101658190Z" level=info msg="ignoring event" container=6046af764b3eb3700b1c9a05253cc6cea3aa93e9aaae28be25ee5f2363ff3e67 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.102863333Z" level=info msg="shim disconnected" id=6046af764b3eb3700b1c9a05253cc6cea3aa93e9aaae28be25ee5f2363ff3e67 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.102938501Z" level=warning msg="cleaning up after shim disconnected" id=6046af764b3eb3700b1c9a05253cc6cea3aa93e9aaae28be25ee5f2363ff3e67 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.102969371Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.103423978Z" level=info msg="shim disconnected" id=c2c94db7394f6f26ce2efcc58ee8e0483392e8239c8849288c604c397db0a9dd namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.103495279Z" level=warning msg="cleaning up after shim disconnected" id=c2c94db7394f6f26ce2efcc58ee8e0483392e8239c8849288c604c397db0a9dd namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.103504099Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.106924516Z" level=info msg="shim disconnected" id=4dffe484555ddf8f830dc7921536167194cf21f0c477336565fc99ded54bcbec namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.106989796Z" level=warning msg="cleaning up after shim disconnected" id=4dffe484555ddf8f830dc7921536167194cf21f0c477336565fc99ded54bcbec namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.107020943Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.113957654Z" level=info msg="shim disconnected" id=4f2c8e5cf3a62fe16b8fb7eeb3a3bae53a1289352d5666d477ab5834080f9eb1 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.114007541Z" level=warning msg="cleaning up after shim disconnected" id=4f2c8e5cf3a62fe16b8fb7eeb3a3bae53a1289352d5666d477ab5834080f9eb1 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.114016088Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.114379509Z" level=info msg="shim disconnected" id=8c4b1ea747198c6eba326546f20cf1831b1163c264b24dd74367cff7dae94a61 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.114435886Z" level=warning msg="cleaning up after shim disconnected" id=8c4b1ea747198c6eba326546f20cf1831b1163c264b24dd74367cff7dae94a61 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.114464669Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.121911240Z" level=info msg="shim disconnected" id=54c6e229dc5bf006ae5b0208e17f992f566b49d9f23bfa8807052b951312f371 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.121960224Z" level=warning msg="cleaning up after shim disconnected" id=54c6e229dc5bf006ae5b0208e17f992f566b49d9f23bfa8807052b951312f371 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.121968604Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.122120128Z" level=info msg="ignoring event" container=54c6e229dc5bf006ae5b0208e17f992f566b49d9f23bfa8807052b951312f371 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.122165975Z" level=info msg="ignoring event" container=4f2c8e5cf3a62fe16b8fb7eeb3a3bae53a1289352d5666d477ab5834080f9eb1 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.122900962Z" level=info msg="shim disconnected" id=87a58d6a9a40d07fbf432db4235d6334a54e6eb3636aa010abeb83abfa060d90 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.122943023Z" level=warning msg="cleaning up after shim disconnected" id=87a58d6a9a40d07fbf432db4235d6334a54e6eb3636aa010abeb83abfa060d90 namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.122951642Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:11.128985604Z" level=info msg="ignoring event" container=ebf670c2055067a40346faf90b3b3c581dbc0d0c0340e62c79cf044c67aab6ae module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.135347133Z" level=info msg="shim disconnected" id=ebf670c2055067a40346faf90b3b3c581dbc0d0c0340e62c79cf044c67aab6ae namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.135399796Z" level=warning msg="cleaning up after shim disconnected" id=ebf670c2055067a40346faf90b3b3c581dbc0d0c0340e62c79cf044c67aab6ae namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.135408738Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.137258688Z" level=warning msg="cleanup warnings time=\"2024-07-22T11:36:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.149775083Z" level=warning msg="cleanup warnings time=\"2024-07-22T11:36:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.154931313Z" level=warning msg="cleanup warnings time=\"2024-07-22T11:36:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.165018049Z" level=warning msg="cleanup warnings time=\"2024-07-22T11:36:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 22 11:36:11 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:11.188423719Z" level=warning msg="cleanup warnings time=\"2024-07-22T11:36:11Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=moby
	Jul 22 11:36:15 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:15.987849601Z" level=info msg="shim disconnected" id=6d8a2bf7202a72a52a67ab1d3544126fb67dffea5f0b9be5d34a187b1679565b namespace=moby
	Jul 22 11:36:15 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:15.991210759Z" level=warning msg="cleaning up after shim disconnected" id=6d8a2bf7202a72a52a67ab1d3544126fb67dffea5f0b9be5d34a187b1679565b namespace=moby
	Jul 22 11:36:15 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:15.991257574Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:15 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:15.993985949Z" level=info msg="ignoring event" container=6d8a2bf7202a72a52a67ab1d3544126fb67dffea5f0b9be5d34a187b1679565b module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:20 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:20.976155260Z" level=info msg="Container failed to exit within 10s of signal 15 - using the force" container=1a9a78693f7e73e0aa80ea1efb84be081dbcc81bc962e3df19febf2f55c84538
	Jul 22 11:36:20 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:20.999099298Z" level=info msg="ignoring event" container=1a9a78693f7e73e0aa80ea1efb84be081dbcc81bc962e3df19febf2f55c84538 module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 11:36:20 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:20.999005911Z" level=info msg="shim disconnected" id=1a9a78693f7e73e0aa80ea1efb84be081dbcc81bc962e3df19febf2f55c84538 namespace=moby
	Jul 22 11:36:20 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:20.999507624Z" level=warning msg="cleaning up after shim disconnected" id=1a9a78693f7e73e0aa80ea1efb84be081dbcc81bc962e3df19febf2f55c84538 namespace=moby
	Jul 22 11:36:20 cert-expiration-371000 dockerd[1277]: time="2024-07-22T11:36:20.999618875Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 11:36:21 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:21.019668441Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 22 11:36:21 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:21.020089991Z" level=info msg="Daemon shutdown complete"
	Jul 22 11:36:21 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:21.020153536Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 22 11:36:21 cert-expiration-371000 dockerd[1271]: time="2024-07-22T11:36:21.020190386Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 22 11:36:22 cert-expiration-371000 systemd[1]: docker.service: Deactivated successfully.
	Jul 22 11:36:22 cert-expiration-371000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 22 11:36:22 cert-expiration-371000 systemd[1]: docker.service: Consumed 3.263s CPU time.
	Jul 22 11:36:22 cert-expiration-371000 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 11:36:22 cert-expiration-371000 dockerd[3833]: time="2024-07-22T11:36:22.054447432Z" level=info msg="Starting up"
	Jul 22 11:37:22 cert-expiration-371000 dockerd[3833]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 22 11:37:22 cert-expiration-371000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 22 11:37:22 cert-expiration-371000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 22 11:37:22 cert-expiration-371000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0722 04:37:21.948984    6615 out.go:239] * 
	W0722 04:37:21.950248    6615 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:37:22.033056    6615 out.go:177] 
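The journalctl excerpt above ends with the actual failure behind the RUNTIME_ENABLE error: on the final restart, dockerd (pid 3833) never manages to dial /run/containerd/containerd.sock and gives up with "context deadline exceeded", so docker.service exits with status 1. Purely as an illustration of that failure mode (this is not minikube or dockerd code; the socket path is taken from the log, the timeout value is an assumption), a minimal Go sketch that probes the same unix socket looks like this:

	// probe_containerd.go - illustrative only; reports the same class of error
	// dockerd logs when containerd never starts listening on its socket.
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		const sock = "/run/containerd/containerd.sock" // path reported by dockerd above
		conn, err := net.DialTimeout("unix", sock, 10*time.Second) // timeout is an assumption
		if err != nil {
			// If containerd is not up, the dial deadline expires before the
			// socket accepts, which is what the failing start above shows.
			fmt.Println("containerd socket not reachable:", err)
			return
		}
		conn.Close()
		fmt.Println("containerd socket is accepting connections")
	}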
	I0722 04:37:20.640351    6653 api_server.go:279] https://192.169.0.33:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0722 04:37:20.640372    6653 api_server.go:103] status: https://192.169.0.33:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0722 04:37:20.640381    6653 api_server.go:253] Checking apiserver healthz at https://192.169.0.33:8443/healthz ...
	I0722 04:37:20.748094    6653 api_server.go:279] https://192.169.0.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 04:37:20.748111    6653 api_server.go:103] status: https://192.169.0.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 04:37:21.046421    6653 api_server.go:253] Checking apiserver healthz at https://192.169.0.33:8443/healthz ...
	I0722 04:37:21.051621    6653 api_server.go:279] https://192.169.0.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 04:37:21.051639    6653 api_server.go:103] status: https://192.169.0.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 04:37:21.546725    6653 api_server.go:253] Checking apiserver healthz at https://192.169.0.33:8443/healthz ...
	I0722 04:37:21.551916    6653 api_server.go:279] https://192.169.0.33:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	W0722 04:37:21.551927    6653 api_server.go:103] status: https://192.169.0.33:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	healthz check failed
	I0722 04:37:22.046416    6653 api_server.go:253] Checking apiserver healthz at https://192.169.0.33:8443/healthz ...
	I0722 04:37:22.051312    6653 api_server.go:279] https://192.169.0.33:8443/healthz returned 200:
	ok
	I0722 04:37:22.055967    6653 api_server.go:141] control plane version: v1.31.0-beta.0
	I0722 04:37:22.055980    6653 api_server.go:131] duration metric: took 3.510166416s to wait for apiserver health ...
	I0722 04:37:22.055986    6653 cni.go:84] Creating CNI manager for ""
	I0722 04:37:22.055994    6653 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:37:22.077050    6653 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0722 04:37:22.098019    6653 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0722 04:37:22.107602    6653 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (496 bytes)
	I0722 04:37:22.122375    6653 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 04:37:22.127710    6653 system_pods.go:59] 5 kube-system pods found
	I0722 04:37:22.127727    6653 system_pods.go:61] "etcd-kubernetes-upgrade-759000" [b1bb6bcd-a2c4-4891-9307-558f1c99a79d] Pending
	I0722 04:37:22.127731    6653 system_pods.go:61] "kube-apiserver-kubernetes-upgrade-759000" [db417c5f-95a5-4894-ba0b-6185134dd663] Pending
	I0722 04:37:22.127734    6653 system_pods.go:61] "kube-controller-manager-kubernetes-upgrade-759000" [48bac9ea-1ab0-4f5e-bd73-ec6391cc5f08] Pending
	I0722 04:37:22.127737    6653 system_pods.go:61] "kube-scheduler-kubernetes-upgrade-759000" [b76ecbc0-36a3-4668-82e8-be242bd28eb6] Pending
	I0722 04:37:22.127742    6653 system_pods.go:61] "storage-provisioner" [84fa8616-dcd9-4483-b834-7daaeb461509] Pending: PodScheduled:Unschedulable (0/1 nodes are available: 1 node(s) had untolerated taint {node.kubernetes.io/not-ready: }. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling.)
	I0722 04:37:22.127754    6653 system_pods.go:74] duration metric: took 5.368318ms to wait for pod list to return data ...
	I0722 04:37:22.127762    6653 node_conditions.go:102] verifying NodePressure condition ...
	I0722 04:37:22.130459    6653 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 04:37:22.130476    6653 node_conditions.go:123] node cpu capacity is 2
	I0722 04:37:22.130488    6653 node_conditions.go:105] duration metric: took 2.720658ms to run NodePressure ...
	I0722 04:37:22.130499    6653 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.31.0-beta.0:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0722 04:37:22.372514    6653 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0722 04:37:22.381498    6653 ops.go:34] apiserver oom_adj: -16
	I0722 04:37:22.381506    6653 kubeadm.go:597] duration metric: took 7.022838558s to restartPrimaryControlPlane
	I0722 04:37:22.381512    6653 kubeadm.go:394] duration metric: took 7.085130873s to StartCluster
	I0722 04:37:22.381521    6653 settings.go:142] acquiring lock: {Name:mk61cf5b2a74edb35dda57ecbe8abc2ea6c58c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:37:22.381591    6653 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19313-1111/kubeconfig
	I0722 04:37:22.382131    6653 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1111/kubeconfig: {Name:mkf2b240918cd66dabf425a67d7df0a0c9aa8c39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:37:22.401186    6653 start.go:235] Will wait 6m0s for node &{Name: IP:192.169.0.33 Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:37:22.401232    6653 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 04:37:22.401443    6653 config.go:182] Loaded profile config "kubernetes-upgrade-759000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.31.0-beta.0
	I0722 04:37:22.422014    6653 addons.go:69] Setting default-storageclass=true in profile "kubernetes-upgrade-759000"
	I0722 04:37:22.422036    6653 addons.go:69] Setting storage-provisioner=true in profile "kubernetes-upgrade-759000"
	I0722 04:37:22.422084    6653 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "kubernetes-upgrade-759000"
	I0722 04:37:22.422106    6653 addons.go:234] Setting addon storage-provisioner=true in "kubernetes-upgrade-759000"
	W0722 04:37:22.422148    6653 addons.go:243] addon storage-provisioner should already be in state true
	I0722 04:37:22.422228    6653 host.go:66] Checking if "kubernetes-upgrade-759000" exists ...
	I0722 04:37:22.422622    6653 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:37:22.422686    6653 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:37:22.422946    6653 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:37:22.423004    6653 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:37:22.432840    6653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54714
	I0722 04:37:22.432964    6653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54715
	I0722 04:37:22.433187    6653 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:37:22.433293    6653 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:37:22.433524    6653 main.go:141] libmachine: Using API Version  1
	I0722 04:37:22.433536    6653 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:37:22.433633    6653 main.go:141] libmachine: Using API Version  1
	I0722 04:37:22.433647    6653 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:37:22.433723    6653 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:37:22.433826    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetState
	I0722 04:37:22.433848    6653 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:37:22.433907    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 04:37:22.433984    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) DBG | hyperkit pid from json: 6608
	I0722 04:37:22.434198    6653 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:37:22.434227    6653 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:37:22.436406    6653 kapi.go:59] client config for kubernetes-upgrade-759000: &rest.Config{Host:"https://192.169.0.33:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/kubernetes-upgrade-759000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/kubernetes-upgrade-759000/client.key", CAFile:"/Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil),
CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x8535ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0722 04:37:22.436802    6653 addons.go:234] Setting addon default-storageclass=true in "kubernetes-upgrade-759000"
	W0722 04:37:22.436810    6653 addons.go:243] addon default-storageclass should already be in state true
	I0722 04:37:22.436828    6653 host.go:66] Checking if "kubernetes-upgrade-759000" exists ...
	I0722 04:37:22.437038    6653 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:37:22.437062    6653 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:37:22.442844    6653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54718
	I0722 04:37:22.443183    6653 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:37:22.443532    6653 main.go:141] libmachine: Using API Version  1
	I0722 04:37:22.443546    6653 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:37:22.443767    6653 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:37:22.443897    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetState
	I0722 04:37:22.444004    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 04:37:22.444071    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) DBG | hyperkit pid from json: 6608
	I0722 04:37:22.445058    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .DriverName
	I0722 04:37:22.464682    6653 out.go:177] * Verifying Kubernetes components...
	I0722 04:37:22.445263    6653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54720
	I0722 04:37:22.465462    6653 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:37:22.486909    6653 main.go:141] libmachine: Using API Version  1
	I0722 04:37:22.486934    6653 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:37:22.487357    6653 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:37:22.487998    6653 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:37:22.488048    6653 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:37:22.496953    6653 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:54722
	I0722 04:37:22.497302    6653 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:37:22.497587    6653 main.go:141] libmachine: Using API Version  1
	I0722 04:37:22.497596    6653 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:37:22.497818    6653 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:37:22.497922    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetState
	I0722 04:37:22.498011    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 04:37:22.498082    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) DBG | hyperkit pid from json: 6608
	I0722 04:37:22.499040    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .DriverName
	I0722 04:37:22.499162    6653 addons.go:431] installing /etc/kubernetes/addons/storageclass.yaml
	I0722 04:37:22.499169    6653 ssh_runner.go:362] scp storageclass/storageclass.yaml --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0722 04:37:22.499177    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHHostname
	I0722 04:37:22.499254    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHPort
	I0722 04:37:22.499362    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHKeyPath
	I0722 04:37:22.499446    6653 main.go:141] libmachine: (kubernetes-upgrade-759000) Calling .GetSSHUsername
	I0722 04:37:22.499533    6653 sshutil.go:53] new ssh client: &{IP:192.169.0.33 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/kubernetes-upgrade-759000/id_rsa Username:docker}
	I0722 04:37:22.506633    6653 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	
	
	==> Docker <==
	Jul 22 11:38:22 cert-expiration-371000 dockerd[4037]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 22 11:38:22 cert-expiration-371000 cri-dockerd[1168]: time="2024-07-22T11:38:22Z" level=error msg="error getting RW layer size for container ID 'b989a500b3290d74306522d768cfe41a76bcbaa69ebbb7214f8be2d30b91e6e8': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/b989a500b3290d74306522d768cfe41a76bcbaa69ebbb7214f8be2d30b91e6e8/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 11:38:22 cert-expiration-371000 cri-dockerd[1168]: time="2024-07-22T11:38:22Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'b989a500b3290d74306522d768cfe41a76bcbaa69ebbb7214f8be2d30b91e6e8'"
	Jul 22 11:38:22 cert-expiration-371000 cri-dockerd[1168]: time="2024-07-22T11:38:22Z" level=error msg="error getting RW layer size for container ID 'cec9f33fba6b356332cb941816485fe9b11cc485fa6de54abcb44df6da550c2a': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/cec9f33fba6b356332cb941816485fe9b11cc485fa6de54abcb44df6da550c2a/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 11:38:22 cert-expiration-371000 cri-dockerd[1168]: time="2024-07-22T11:38:22Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'cec9f33fba6b356332cb941816485fe9b11cc485fa6de54abcb44df6da550c2a'"
	Jul 22 11:38:22 cert-expiration-371000 cri-dockerd[1168]: time="2024-07-22T11:38:22Z" level=error msg="error getting RW layer size for container ID 'c2c94db7394f6f26ce2efcc58ee8e0483392e8239c8849288c604c397db0a9dd': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/c2c94db7394f6f26ce2efcc58ee8e0483392e8239c8849288c604c397db0a9dd/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 11:38:22 cert-expiration-371000 cri-dockerd[1168]: time="2024-07-22T11:38:22Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'c2c94db7394f6f26ce2efcc58ee8e0483392e8239c8849288c604c397db0a9dd'"
	Jul 22 11:38:22 cert-expiration-371000 cri-dockerd[1168]: time="2024-07-22T11:38:22Z" level=error msg="error getting RW layer size for container ID 'ebf670c2055067a40346faf90b3b3c581dbc0d0c0340e62c79cf044c67aab6ae': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/ebf670c2055067a40346faf90b3b3c581dbc0d0c0340e62c79cf044c67aab6ae/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 11:38:22 cert-expiration-371000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 22 11:38:22 cert-expiration-371000 cri-dockerd[1168]: time="2024-07-22T11:38:22Z" level=error msg="Set backoffDuration to : 1m0s for container ID 'ebf670c2055067a40346faf90b3b3c581dbc0d0c0340e62c79cf044c67aab6ae'"
	Jul 22 11:38:22 cert-expiration-371000 cri-dockerd[1168]: time="2024-07-22T11:38:22Z" level=error msg="error getting RW layer size for container ID '54c6e229dc5bf006ae5b0208e17f992f566b49d9f23bfa8807052b951312f371': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/54c6e229dc5bf006ae5b0208e17f992f566b49d9f23bfa8807052b951312f371/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 11:38:22 cert-expiration-371000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 22 11:38:22 cert-expiration-371000 systemd[1]: Failed to start Docker Application Container Engine.
	Jul 22 11:38:22 cert-expiration-371000 cri-dockerd[1168]: time="2024-07-22T11:38:22Z" level=error msg="Set backoffDuration to : 1m0s for container ID '54c6e229dc5bf006ae5b0208e17f992f566b49d9f23bfa8807052b951312f371'"
	Jul 22 11:38:22 cert-expiration-371000 cri-dockerd[1168]: time="2024-07-22T11:38:22Z" level=error msg="error getting RW layer size for container ID '1a9a78693f7e73e0aa80ea1efb84be081dbcc81bc962e3df19febf2f55c84538': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/1a9a78693f7e73e0aa80ea1efb84be081dbcc81bc962e3df19febf2f55c84538/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 11:38:22 cert-expiration-371000 cri-dockerd[1168]: time="2024-07-22T11:38:22Z" level=error msg="Set backoffDuration to : 1m0s for container ID '1a9a78693f7e73e0aa80ea1efb84be081dbcc81bc962e3df19febf2f55c84538'"
	Jul 22 11:38:22 cert-expiration-371000 cri-dockerd[1168]: time="2024-07-22T11:38:22Z" level=error msg="Unable to get docker version: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 11:38:22 cert-expiration-371000 cri-dockerd[1168]: time="2024-07-22T11:38:22Z" level=error msg="error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peerFailed to get image list from docker"
	Jul 22 11:38:22 cert-expiration-371000 cri-dockerd[1168]: time="2024-07-22T11:38:22Z" level=error msg="error getting RW layer size for container ID '6d8a2bf7202a72a52a67ab1d3544126fb67dffea5f0b9be5d34a187b1679565b': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/6d8a2bf7202a72a52a67ab1d3544126fb67dffea5f0b9be5d34a187b1679565b/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 11:38:22 cert-expiration-371000 cri-dockerd[1168]: time="2024-07-22T11:38:22Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6d8a2bf7202a72a52a67ab1d3544126fb67dffea5f0b9be5d34a187b1679565b'"
	Jul 22 11:38:22 cert-expiration-371000 cri-dockerd[1168]: time="2024-07-22T11:38:22Z" level=error msg="error getting RW layer size for container ID '6046af764b3eb3700b1c9a05253cc6cea3aa93e9aaae28be25ee5f2363ff3e67': error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/6046af764b3eb3700b1c9a05253cc6cea3aa93e9aaae28be25ee5f2363ff3e67/json?size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 11:38:22 cert-expiration-371000 cri-dockerd[1168]: time="2024-07-22T11:38:22Z" level=error msg="Set backoffDuration to : 1m0s for container ID '6046af764b3eb3700b1c9a05253cc6cea3aa93e9aaae28be25ee5f2363ff3e67'"
	Jul 22 11:38:22 cert-expiration-371000 systemd[1]: docker.service: Scheduled restart job, restart counter is at 2.
	Jul 22 11:38:22 cert-expiration-371000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 22 11:38:22 cert-expiration-371000 systemd[1]: Starting Docker Application Container Engine...
	
	
	==> container status <==
	command /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a" failed with error: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a": Process exited with status 1
	stdout:
	
	stderr:
	time="2024-07-22T11:38:24Z" level=fatal msg="validate service connection: validate CRI v1 runtime API for endpoint \"unix:///var/run/cri-dockerd.sock\": rpc error: code = DeadlineExceeded desc = context deadline exceeded"
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.30.3/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[  +0.093640] systemd-fstab-generator[507]: Ignoring "noauto" option for root device
	[  +1.801700] systemd-fstab-generator[842]: Ignoring "noauto" option for root device
	[  +0.341265] systemd-fstab-generator[880]: Ignoring "noauto" option for root device
	[  +0.053149] kauditd_printk_skb: 95 callbacks suppressed
	[  +0.060095] systemd-fstab-generator[892]: Ignoring "noauto" option for root device
	[  +0.125565] systemd-fstab-generator[906]: Ignoring "noauto" option for root device
	[  +2.477236] systemd-fstab-generator[1121]: Ignoring "noauto" option for root device
	[  +0.099585] systemd-fstab-generator[1133]: Ignoring "noauto" option for root device
	[  +0.099794] systemd-fstab-generator[1145]: Ignoring "noauto" option for root device
	[  +0.119336] systemd-fstab-generator[1160]: Ignoring "noauto" option for root device
	[  +3.863560] systemd-fstab-generator[1263]: Ignoring "noauto" option for root device
	[  +0.051645] kauditd_printk_skb: 180 callbacks suppressed
	[  +2.526064] systemd-fstab-generator[1508]: Ignoring "noauto" option for root device
	[  +4.149859] systemd-fstab-generator[1689]: Ignoring "noauto" option for root device
	[  +0.059673] kauditd_printk_skb: 70 callbacks suppressed
	[Jul22 11:33] systemd-fstab-generator[2096]: Ignoring "noauto" option for root device
	[  +0.096214] kauditd_printk_skb: 62 callbacks suppressed
	[  +1.260184] systemd-fstab-generator[2163]: Ignoring "noauto" option for root device
	[ +13.384942] kauditd_printk_skb: 34 callbacks suppressed
	[ +30.280022] kauditd_printk_skb: 57 callbacks suppressed
	[Jul22 11:36] systemd-fstab-generator[3370]: Ignoring "noauto" option for root device
	[  +0.275534] systemd-fstab-generator[3405]: Ignoring "noauto" option for root device
	[  +0.135395] systemd-fstab-generator[3417]: Ignoring "noauto" option for root device
	[  +0.138792] systemd-fstab-generator[3431]: Ignoring "noauto" option for root device
	[  +5.141733] kauditd_printk_skb: 91 callbacks suppressed
	
	
	==> kernel <==
	 11:39:22 up 6 min,  0 users,  load average: 0.06, 0.20, 0.11
	Linux cert-expiration-371000 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Jul 22 11:39:14 cert-expiration-371000 kubelet[2103]: E0722 11:39:14.172645    2103 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"cert-expiration-371000\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/cert-expiration-371000?resourceVersion=0&timeout=10s\": dial tcp 192.169.0.31:8443: connect: connection refused"
	Jul 22 11:39:14 cert-expiration-371000 kubelet[2103]: E0722 11:39:14.173538    2103 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"cert-expiration-371000\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/cert-expiration-371000?timeout=10s\": dial tcp 192.169.0.31:8443: connect: connection refused"
	Jul 22 11:39:14 cert-expiration-371000 kubelet[2103]: E0722 11:39:14.174241    2103 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"cert-expiration-371000\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/cert-expiration-371000?timeout=10s\": dial tcp 192.169.0.31:8443: connect: connection refused"
	Jul 22 11:39:14 cert-expiration-371000 kubelet[2103]: E0722 11:39:14.174793    2103 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"cert-expiration-371000\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/cert-expiration-371000?timeout=10s\": dial tcp 192.169.0.31:8443: connect: connection refused"
	Jul 22 11:39:14 cert-expiration-371000 kubelet[2103]: E0722 11:39:14.175339    2103 kubelet_node_status.go:544] "Error updating node status, will retry" err="error getting node \"cert-expiration-371000\": Get \"https://control-plane.minikube.internal:8443/api/v1/nodes/cert-expiration-371000?timeout=10s\": dial tcp 192.169.0.31:8443: connect: connection refused"
	Jul 22 11:39:14 cert-expiration-371000 kubelet[2103]: E0722 11:39:14.175387    2103 kubelet_node_status.go:531] "Unable to update node status" err="update node status exceeds retry count"
	Jul 22 11:39:16 cert-expiration-371000 kubelet[2103]: I0722 11:39:16.579299    2103 status_manager.go:853] "Failed to get status for pod" podUID="f166bfd864e672c5734102d3c2275a35" pod="kube-system/kube-apiserver-cert-expiration-371000" err="Get \"https://control-plane.minikube.internal:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-cert-expiration-371000\": dial tcp 192.169.0.31:8443: connect: connection refused"
	Jul 22 11:39:18 cert-expiration-371000 kubelet[2103]: E0722 11:39:18.462347    2103 kubelet.go:2370] "Skipping pod synchronization" err="[container runtime is down, PLEG is not healthy: pleg was last seen active 3m7.803415534s ago; threshold is 3m0s, container runtime not ready: RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/version\": read unix @->/var/run/docker.sock: read: connection reset by peer]"
	Jul 22 11:39:21 cert-expiration-371000 kubelet[2103]: E0722 11:39:21.117314    2103 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://control-plane.minikube.internal:8443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/cert-expiration-371000?timeout=10s\": dial tcp 192.169.0.31:8443: connect: connection refused" interval="7s"
	Jul 22 11:39:22 cert-expiration-371000 kubelet[2103]: E0722 11:39:22.639824    2103 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 22 11:39:22 cert-expiration-371000 kubelet[2103]: E0722 11:39:22.640090    2103 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 11:39:22 cert-expiration-371000 kubelet[2103]: E0722 11:39:22.640212    2103 remote_image.go:232] "ImageFsInfo from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 11:39:22 cert-expiration-371000 kubelet[2103]: E0722 11:39:22.640262    2103 eviction_manager.go:282] "Eviction manager: failed to get summary stats" err="failed to get imageFs stats: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json?all=1&shared-size=1\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 11:39:22 cert-expiration-371000 kubelet[2103]: E0722 11:39:22.640299    2103 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 22 11:39:22 cert-expiration-371000 kubelet[2103]: E0722 11:39:22.640337    2103 container_log_manager.go:194] "Failed to rotate container logs" err="failed to list containers: rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 11:39:22 cert-expiration-371000 kubelet[2103]: E0722 11:39:22.640387    2103 remote_runtime.go:294] "ListPodSandbox with filter from runtime service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 22 11:39:22 cert-expiration-371000 kubelet[2103]: E0722 11:39:22.640426    2103 kuberuntime_sandbox.go:297] "Failed to list pod sandboxes" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 11:39:22 cert-expiration-371000 kubelet[2103]: E0722 11:39:22.640457    2103 generic.go:238] "GenericPLEG: Unable to retrieve pods" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)podsandbox%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 11:39:22 cert-expiration-371000 kubelet[2103]: E0722 11:39:22.640508    2103 kubelet.go:2919] "Container runtime not ready" runtimeReady="RuntimeReady=false reason:DockerDaemonNotReady message:docker: failed to get docker version: failed to get docker version from dockerd: Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 22 11:39:22 cert-expiration-371000 kubelet[2103]: E0722 11:39:22.640552    2103 remote_image.go:128] "ListImages with filter from image service failed" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer" filter="nil"
	Jul 22 11:39:22 cert-expiration-371000 kubelet[2103]: E0722 11:39:22.640588    2103 kuberuntime_image.go:117] "Failed to list images" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 11:39:22 cert-expiration-371000 kubelet[2103]: I0722 11:39:22.640618    2103 image_gc_manager.go:222] "Failed to update image list" err="rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/images/json\": read unix @->/var/run/docker.sock: read: connection reset by peer"
	Jul 22 11:39:22 cert-expiration-371000 kubelet[2103]: E0722 11:39:22.640856    2103 remote_runtime.go:407] "ListContainers with filter from runtime service failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?" filter="&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},}"
	Jul 22 11:39:22 cert-expiration-371000 kubelet[2103]: E0722 11:39:22.640903    2103 kuberuntime_container.go:495] "ListContainers failed" err="rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?"
	Jul 22 11:39:22 cert-expiration-371000 kubelet[2103]: E0722 11:39:22.641129    2103 kubelet.go:1436] "Container garbage collection failed" err="[rpc error: code = Unknown desc = error during connect: Get \"http://%!F(MISSING)var%!F(MISSING)run%!F(MISSING)docker.sock/v1.43/containers/json?all=1&filters=%!B(MISSING)%!l(MISSING)abel%3A%!B(MISSING)%!i(MISSING)o.kubernetes.docker.type%!D(MISSING)container%3Atrue%!D(MISSING)%!D(MISSING)\": read unix @->/var/run/docker.sock: read: connection reset by peer, rpc error: code = Unknown desc = Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?]"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 04:38:22.248614    6669 logs.go:273] Failed to list containers for "kube-apiserver": docker: docker ps -a --filter=name=k8s_kube-apiserver --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0722 04:38:22.261033    6669 logs.go:273] Failed to list containers for "etcd": docker: docker ps -a --filter=name=k8s_etcd --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0722 04:38:22.272285    6669 logs.go:273] Failed to list containers for "coredns": docker: docker ps -a --filter=name=k8s_coredns --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0722 04:38:22.283551    6669 logs.go:273] Failed to list containers for "kube-scheduler": docker: docker ps -a --filter=name=k8s_kube-scheduler --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0722 04:38:22.296154    6669 logs.go:273] Failed to list containers for "kube-proxy": docker: docker ps -a --filter=name=k8s_kube-proxy --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0722 04:38:22.307585    6669 logs.go:273] Failed to list containers for "kube-controller-manager": docker: docker ps -a --filter=name=k8s_kube-controller-manager --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0722 04:38:22.318178    6669 logs.go:273] Failed to list containers for "kindnet": docker: docker ps -a --filter=name=k8s_kindnet --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	E0722 04:38:22.329813    6669 logs.go:273] Failed to list containers for "storage-provisioner": docker: docker ps -a --filter=name=k8s_storage-provisioner --format={{.ID}}: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p cert-expiration-371000 -n cert-expiration-371000
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p cert-expiration-371000 -n cert-expiration-371000: exit status 2 (145.310053ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "cert-expiration-371000" apiserver is not running, skipping kubectl commands (state="Stopped")
helpers_test.go:175: Cleaning up "cert-expiration-371000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-expiration-371000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-expiration-371000: (5.25932101s)
--- FAIL: TestCertExpiration (417.63s)

                                                
                                    
TestMultiControlPlane/serial/RestartCluster (203.77s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-090000 --wait=true -v=7 --alsologtostderr --driver=hyperkit 
E0722 03:56:22.388514    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0722 03:57:46.601880    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
ha_test.go:560: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p ha-090000 --wait=true -v=7 --alsologtostderr --driver=hyperkit : exit status 90 (3m19.937244553s)

                                                
                                                
-- stdout --
	* [ha-090000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1111/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1111/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	* Starting "ha-090000" primary control-plane node in "ha-090000" cluster
	* Restarting existing hyperkit VM for "ha-090000" ...
	* Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	* Enabled addons: 
	
	* Starting "ha-090000-m02" control-plane node in "ha-090000" cluster
	* Restarting existing hyperkit VM for "ha-090000-m02" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5
	* Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	  - env NO_PROXY=192.169.0.5
	* Verifying Kubernetes components...
	
	* Starting "ha-090000-m04" worker node in "ha-090000" cluster
	* Restarting existing hyperkit VM for "ha-090000-m04" ...
	* Found network options:
	  - NO_PROXY=192.169.0.5,192.169.0.6
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 03:55:14.001165    3911 out.go:291] Setting OutFile to fd 1 ...
	I0722 03:55:14.001338    3911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:55:14.001344    3911 out.go:304] Setting ErrFile to fd 2...
	I0722 03:55:14.001348    3911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:55:14.001524    3911 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1111/.minikube/bin
	I0722 03:55:14.002913    3911 out.go:298] Setting JSON to false
	I0722 03:55:14.025317    3911 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3283,"bootTime":1721642431,"procs":427,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0722 03:55:14.025414    3911 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 03:55:14.048097    3911 out.go:177] * [ha-090000] minikube v1.33.1 on Darwin 14.5
	I0722 03:55:14.089944    3911 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 03:55:14.089999    3911 notify.go:220] Checking for updates...
	I0722 03:55:14.132553    3911 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1111/kubeconfig
	I0722 03:55:14.153953    3911 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0722 03:55:14.177091    3911 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 03:55:14.197830    3911 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1111/.minikube
	I0722 03:55:14.219112    3911 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 03:55:14.240693    3911 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:55:14.241352    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:55:14.241433    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:55:14.250957    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51948
	I0722 03:55:14.251322    3911 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:55:14.251741    3911 main.go:141] libmachine: Using API Version  1
	I0722 03:55:14.251758    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:55:14.252024    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:55:14.252166    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:14.252364    3911 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 03:55:14.252613    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:55:14.252647    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:55:14.260865    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51950
	I0722 03:55:14.261199    3911 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:55:14.261501    3911 main.go:141] libmachine: Using API Version  1
	I0722 03:55:14.261519    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:55:14.261723    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:55:14.261829    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:14.289710    3911 out.go:177] * Using the hyperkit driver based on existing profile
	I0722 03:55:14.331989    3911 start.go:297] selected driver: hyperkit
	I0722 03:55:14.332015    3911 start.go:901] validating driver "hyperkit" against &{Name:ha-090000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:ha-090000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 03:55:14.332262    3911 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 03:55:14.332464    3911 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 03:55:14.332656    3911 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19313-1111/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0722 03:55:14.342163    3911 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0722 03:55:14.345875    3911 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:55:14.345899    3911 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0722 03:55:14.348432    3911 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 03:55:14.348467    3911 cni.go:84] Creating CNI manager for ""
	I0722 03:55:14.348473    3911 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0722 03:55:14.348551    3911 start.go:340] cluster config:
	{Name:ha-090000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-090000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false
kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 03:55:14.348667    3911 iso.go:125] acquiring lock: {Name:mk28fa3b914b659bb36b0449a0ad3ab1345dae32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 03:55:14.390735    3911 out.go:177] * Starting "ha-090000" primary control-plane node in "ha-090000" cluster
	I0722 03:55:14.412034    3911 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 03:55:14.412101    3911 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0722 03:55:14.412134    3911 cache.go:56] Caching tarball of preloaded images
	I0722 03:55:14.412332    3911 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0722 03:55:14.412374    3911 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 03:55:14.412547    3911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/config.json ...
	I0722 03:55:14.413322    3911 start.go:360] acquireMachinesLock for ha-090000: {Name:mk52223550765842aacf96640479870ec8b5e985 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 03:55:14.413444    3911 start.go:364] duration metric: took 104.878µs to acquireMachinesLock for "ha-090000"
	I0722 03:55:14.413466    3911 start.go:96] Skipping create...Using existing machine configuration
	I0722 03:55:14.413480    3911 fix.go:54] fixHost starting: 
	I0722 03:55:14.413779    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:55:14.413805    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:55:14.422850    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51952
	I0722 03:55:14.423211    3911 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:55:14.423607    3911 main.go:141] libmachine: Using API Version  1
	I0722 03:55:14.423626    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:55:14.423868    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:55:14.424010    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:14.424163    3911 main.go:141] libmachine: (ha-090000) Calling .GetState
	I0722 03:55:14.424269    3911 main.go:141] libmachine: (ha-090000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:55:14.424340    3911 main.go:141] libmachine: (ha-090000) DBG | hyperkit pid from json: 3743
	I0722 03:55:14.425373    3911 main.go:141] libmachine: (ha-090000) DBG | hyperkit pid 3743 missing from process table
	I0722 03:55:14.425407    3911 fix.go:112] recreateIfNeeded on ha-090000: state=Stopped err=<nil>
	I0722 03:55:14.425425    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	W0722 03:55:14.425550    3911 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 03:55:14.467917    3911 out.go:177] * Restarting existing hyperkit VM for "ha-090000" ...
	I0722 03:55:14.490898    3911 main.go:141] libmachine: (ha-090000) Calling .Start
	I0722 03:55:14.491161    3911 main.go:141] libmachine: (ha-090000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:55:14.491206    3911 main.go:141] libmachine: (ha-090000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/hyperkit.pid
	I0722 03:55:14.492929    3911 main.go:141] libmachine: (ha-090000) DBG | hyperkit pid 3743 missing from process table
	I0722 03:55:14.492946    3911 main.go:141] libmachine: (ha-090000) DBG | pid 3743 is in state "Stopped"
	I0722 03:55:14.492978    3911 main.go:141] libmachine: (ha-090000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/hyperkit.pid...
	I0722 03:55:14.493148    3911 main.go:141] libmachine: (ha-090000) DBG | Using UUID 865eb55d-4879-4f09-8c93-9ca2b7f6f541
	I0722 03:55:14.657956    3911 main.go:141] libmachine: (ha-090000) DBG | Generated MAC de:e:68:47:cf:44
	I0722 03:55:14.657983    3911 main.go:141] libmachine: (ha-090000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-090000
	I0722 03:55:14.658095    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"865eb55d-4879-4f09-8c93-9ca2b7f6f541", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c2780)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0722 03:55:14.658125    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"865eb55d-4879-4f09-8c93-9ca2b7f6f541", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c2780)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0722 03:55:14.658167    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "865eb55d-4879-4f09-8c93-9ca2b7f6f541", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/ha-090000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/tty,log=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/bzimage,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-090000"}
	I0722 03:55:14.658258    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 865eb55d-4879-4f09-8c93-9ca2b7f6f541 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/ha-090000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/tty,log=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/console-ring -f kexec,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/bzimage,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-090000"
	I0722 03:55:14.658283    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0722 03:55:14.659556    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 DEBUG: hyperkit: Pid is 3926
	I0722 03:55:14.659971    3911 main.go:141] libmachine: (ha-090000) DBG | Attempt 0
	I0722 03:55:14.659983    3911 main.go:141] libmachine: (ha-090000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:55:14.660096    3911 main.go:141] libmachine: (ha-090000) DBG | hyperkit pid from json: 3926
	I0722 03:55:14.661907    3911 main.go:141] libmachine: (ha-090000) DBG | Searching for de:e:68:47:cf:44 in /var/db/dhcpd_leases ...
	I0722 03:55:14.661965    3911 main.go:141] libmachine: (ha-090000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0722 03:55:14.661986    3911 main.go:141] libmachine: (ha-090000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:b6:20:31:8d:e0:89 ID:1,b6:20:31:8d:e0:89 Lease:0x669e3a76}
	I0722 03:55:14.662001    3911 main.go:141] libmachine: (ha-090000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ca:7d:32:d9:5d:55 ID:1,ca:7d:32:d9:5d:55 Lease:0x669f8bc8}
	I0722 03:55:14.662012    3911 main.go:141] libmachine: (ha-090000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4e:65:fa:f9:26:3 ID:1,4e:65:fa:f9:26:3 Lease:0x669f8b16}
	I0722 03:55:14.662031    3911 main.go:141] libmachine: (ha-090000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:de:e:68:47:cf:44 ID:1,de:e:68:47:cf:44 Lease:0x669f8aec}
	I0722 03:55:14.662047    3911 main.go:141] libmachine: (ha-090000) DBG | Found match: de:e:68:47:cf:44
	I0722 03:55:14.662058    3911 main.go:141] libmachine: (ha-090000) DBG | IP: 192.169.0.5
	I0722 03:55:14.662088    3911 main.go:141] libmachine: (ha-090000) Calling .GetConfigRaw
	I0722 03:55:14.662970    3911 main.go:141] libmachine: (ha-090000) Calling .GetIP
	I0722 03:55:14.663190    3911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/config.json ...
	I0722 03:55:14.663619    3911 machine.go:94] provisionDockerMachine start ...
	I0722 03:55:14.663631    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:14.663775    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:14.663892    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:14.663995    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:14.664107    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:14.664217    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:14.664369    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:55:14.664624    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0722 03:55:14.664637    3911 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 03:55:14.668018    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0722 03:55:14.726271    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0722 03:55:14.726986    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0722 03:55:14.727016    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0722 03:55:14.727030    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0722 03:55:14.727041    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0722 03:55:15.102308    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:15 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0722 03:55:15.102323    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:15 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0722 03:55:15.217057    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0722 03:55:15.217079    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0722 03:55:15.217092    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0722 03:55:15.217103    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0722 03:55:15.217955    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:15 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0722 03:55:15.217966    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:15 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0722 03:55:20.486836    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:20 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0722 03:55:20.486863    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:20 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0722 03:55:20.486878    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:20 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0722 03:55:20.511003    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:20 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0722 03:55:49.725974    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 03:55:49.725988    3911 main.go:141] libmachine: (ha-090000) Calling .GetMachineName
	I0722 03:55:49.726125    3911 buildroot.go:166] provisioning hostname "ha-090000"
	I0722 03:55:49.726138    3911 main.go:141] libmachine: (ha-090000) Calling .GetMachineName
	I0722 03:55:49.726243    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:49.726335    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:49.726420    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:49.726506    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:49.726616    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:49.726741    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:55:49.726890    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0722 03:55:49.726899    3911 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-090000 && echo "ha-090000" | sudo tee /etc/hostname
	I0722 03:55:49.789306    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-090000
	
	I0722 03:55:49.789328    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:49.789466    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:49.789581    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:49.789678    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:49.789776    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:49.789915    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:55:49.790061    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0722 03:55:49.790072    3911 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-090000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-090000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-090000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 03:55:49.849551    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 03:55:49.849576    3911 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19313-1111/.minikube CaCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19313-1111/.minikube}
	I0722 03:55:49.849589    3911 buildroot.go:174] setting up certificates
	I0722 03:55:49.849598    3911 provision.go:84] configureAuth start
	I0722 03:55:49.849606    3911 main.go:141] libmachine: (ha-090000) Calling .GetMachineName
	I0722 03:55:49.849736    3911 main.go:141] libmachine: (ha-090000) Calling .GetIP
	I0722 03:55:49.849829    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:49.849906    3911 provision.go:143] copyHostCerts
	I0722 03:55:49.849941    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem
	I0722 03:55:49.850010    3911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem, removing ...
	I0722 03:55:49.850019    3911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem
	I0722 03:55:49.850190    3911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem (1078 bytes)
	I0722 03:55:49.850418    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem
	I0722 03:55:49.850458    3911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem, removing ...
	I0722 03:55:49.850463    3911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem
	I0722 03:55:49.850553    3911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem (1123 bytes)
	I0722 03:55:49.850707    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem
	I0722 03:55:49.850746    3911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem, removing ...
	I0722 03:55:49.850751    3911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem
	I0722 03:55:49.850838    3911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem (1675 bytes)
	I0722 03:55:49.850994    3911 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem org=jenkins.ha-090000 san=[127.0.0.1 192.169.0.5 ha-090000 localhost minikube]
	I0722 03:55:49.954745    3911 provision.go:177] copyRemoteCerts
	I0722 03:55:49.954797    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 03:55:49.954814    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:49.954945    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:49.955036    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:49.955138    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:49.955226    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/id_rsa Username:docker}
	I0722 03:55:49.988017    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0722 03:55:49.988090    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0722 03:55:50.006955    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0722 03:55:50.007018    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0722 03:55:50.026488    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0722 03:55:50.026558    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 03:55:50.045921    3911 provision.go:87] duration metric: took 196.3146ms to configureAuth
	I0722 03:55:50.045933    3911 buildroot.go:189] setting minikube options for container-runtime
	I0722 03:55:50.046087    3911 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:55:50.046101    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:50.046225    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:50.046308    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:50.046401    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:50.046493    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:50.046569    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:50.046685    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:55:50.046803    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0722 03:55:50.046811    3911 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0722 03:55:50.100376    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0722 03:55:50.100387    3911 buildroot.go:70] root file system type: tmpfs
	I0722 03:55:50.100457    3911 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0722 03:55:50.100468    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:50.100595    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:50.100692    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:50.100789    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:50.100888    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:50.101021    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:55:50.101173    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0722 03:55:50.101220    3911 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0722 03:55:50.162706    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0722 03:55:50.162761    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:50.162891    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:50.162997    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:50.163099    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:50.163182    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:50.163329    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:55:50.163465    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0722 03:55:50.163477    3911 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0722 03:55:51.839255    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0722 03:55:51.839270    3911 machine.go:97] duration metric: took 37.176641879s to provisionDockerMachine
	I0722 03:55:51.839283    3911 start.go:293] postStartSetup for "ha-090000" (driver="hyperkit")
	I0722 03:55:51.839300    3911 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 03:55:51.839314    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:51.839490    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 03:55:51.839510    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:51.839611    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:51.839703    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:51.839796    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:51.839928    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/id_rsa Username:docker}
	I0722 03:55:51.873857    3911 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 03:55:51.877062    3911 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 03:55:51.877075    3911 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1111/.minikube/addons for local assets ...
	I0722 03:55:51.877182    3911 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1111/.minikube/files for local assets ...
	I0722 03:55:51.877378    3911 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem -> 16372.pem in /etc/ssl/certs
	I0722 03:55:51.877384    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem -> /etc/ssl/certs/16372.pem
	I0722 03:55:51.877594    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 03:55:51.885692    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem --> /etc/ssl/certs/16372.pem (1708 bytes)
	I0722 03:55:51.904673    3911 start.go:296] duration metric: took 65.382263ms for postStartSetup
	I0722 03:55:51.904692    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:51.904859    3911 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0722 03:55:51.904872    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:51.904961    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:51.905042    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:51.905118    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:51.905210    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/id_rsa Username:docker}
	I0722 03:55:51.938400    3911 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0722 03:55:51.938461    3911 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0722 03:55:51.992039    3911 fix.go:56] duration metric: took 37.579572847s for fixHost
	I0722 03:55:51.992063    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:51.992208    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:51.992304    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:51.992398    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:51.992482    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:51.992602    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:55:51.992763    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0722 03:55:51.992770    3911 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 03:55:52.046381    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721645751.936433832
	
	I0722 03:55:52.046393    3911 fix.go:216] guest clock: 1721645751.936433832
	I0722 03:55:52.046398    3911 fix.go:229] Guest: 2024-07-22 03:55:51.936433832 -0700 PDT Remote: 2024-07-22 03:55:51.992052 -0700 PDT m=+38.026686282 (delta=-55.618168ms)
	I0722 03:55:52.046416    3911 fix.go:200] guest clock delta is within tolerance: -55.618168ms
	I0722 03:55:52.046421    3911 start.go:83] releasing machines lock for "ha-090000", held for 37.633981911s
	I0722 03:55:52.046442    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:52.046575    3911 main.go:141] libmachine: (ha-090000) Calling .GetIP
	I0722 03:55:52.046677    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:52.046990    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:52.047122    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:52.047226    3911 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 03:55:52.047248    3911 ssh_runner.go:195] Run: cat /version.json
	I0722 03:55:52.047259    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:52.047259    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:52.047380    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:52.047396    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:52.047483    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:52.047511    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:52.047561    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:52.047626    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:52.047654    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/id_rsa Username:docker}
	I0722 03:55:52.047720    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/id_rsa Username:docker}
	I0722 03:55:52.075429    3911 ssh_runner.go:195] Run: systemctl --version
	I0722 03:55:52.079894    3911 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 03:55:52.124828    3911 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 03:55:52.124898    3911 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 03:55:52.137859    3911 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 03:55:52.137870    3911 start.go:495] detecting cgroup driver to use...
	I0722 03:55:52.137970    3911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 03:55:52.155379    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0722 03:55:52.164198    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0722 03:55:52.173115    3911 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0722 03:55:52.173156    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0722 03:55:52.182074    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 03:55:52.190972    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0722 03:55:52.199765    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 03:55:52.208507    3911 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 03:55:52.217591    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0722 03:55:52.226424    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0722 03:55:52.235243    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0722 03:55:52.244124    3911 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 03:55:52.252099    3911 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 03:55:52.259973    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:55:52.354629    3911 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0722 03:55:52.373701    3911 start.go:495] detecting cgroup driver to use...
	I0722 03:55:52.373781    3911 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0722 03:55:52.386226    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 03:55:52.407006    3911 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 03:55:52.422442    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 03:55:52.433467    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 03:55:52.445302    3911 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0722 03:55:52.465665    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 03:55:52.477795    3911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 03:55:52.493683    3911 ssh_runner.go:195] Run: which cri-dockerd
	I0722 03:55:52.496631    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0722 03:55:52.503860    3911 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0722 03:55:52.517344    3911 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0722 03:55:52.615407    3911 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0722 03:55:52.719878    3911 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0722 03:55:52.719955    3911 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0722 03:55:52.735170    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:55:52.840992    3911 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 03:55:55.172776    3911 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.331829238s)
	I0722 03:55:55.172846    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0722 03:55:55.183162    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 03:55:55.193307    3911 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0722 03:55:55.284550    3911 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0722 03:55:55.395161    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:55:55.503613    3911 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0722 03:55:55.517310    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 03:55:55.528594    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:55:55.620227    3911 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0722 03:55:55.685036    3911 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0722 03:55:55.685111    3911 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0722 03:55:55.689532    3911 start.go:563] Will wait 60s for crictl version
	I0722 03:55:55.689580    3911 ssh_runner.go:195] Run: which crictl
	I0722 03:55:55.692688    3911 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 03:55:55.719714    3911 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0722 03:55:55.719788    3911 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 03:55:55.737225    3911 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 03:55:55.780302    3911 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0722 03:55:55.780349    3911 main.go:141] libmachine: (ha-090000) Calling .GetIP
	I0722 03:55:55.780734    3911 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0722 03:55:55.785388    3911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 03:55:55.796137    3911 kubeadm.go:883] updating cluster {Name:ha-090000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-090000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 03:55:55.796229    3911 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 03:55:55.796288    3911 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0722 03:55:55.808589    3911 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240719-e7903573
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0722 03:55:55.808605    3911 docker.go:615] Images already preloaded, skipping extraction
	I0722 03:55:55.808686    3911 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0722 03:55:55.823528    3911 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240719-e7903573
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0722 03:55:55.823552    3911 cache_images.go:84] Images are preloaded, skipping loading
	I0722 03:55:55.823561    3911 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.30.3 docker true true} ...
	I0722 03:55:55.823650    3911 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-090000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-090000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 03:55:55.823715    3911 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0722 03:55:55.843770    3911 cni.go:84] Creating CNI manager for ""
	I0722 03:55:55.843782    3911 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0722 03:55:55.843795    3911 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 03:55:55.843811    3911 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-090000 NodeName:ha-090000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 03:55:55.843918    3911 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-090000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0722 03:55:55.843949    3911 kube-vip.go:115] generating kube-vip config ...
	I0722 03:55:55.843997    3911 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0722 03:55:55.858984    3911 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0722 03:55:55.859051    3911 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0722 03:55:55.859099    3911 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 03:55:55.871541    3911 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 03:55:55.871605    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0722 03:55:55.879901    3911 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0722 03:55:55.893317    3911 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 03:55:55.906860    3911 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0722 03:55:55.920583    3911 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0722 03:55:55.934115    3911 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0722 03:55:55.937202    3911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 03:55:55.947512    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:55:56.043601    3911 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 03:55:56.058460    3911 certs.go:68] Setting up /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000 for IP: 192.169.0.5
	I0722 03:55:56.058473    3911 certs.go:194] generating shared ca certs ...
	I0722 03:55:56.058482    3911 certs.go:226] acquiring lock for ca certs: {Name:mk31b6ba3ba4e51acc59db740baf7c8ba8dd988b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:55:56.058655    3911 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.key
	I0722 03:55:56.058735    3911 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.key
	I0722 03:55:56.058744    3911 certs.go:256] generating profile certs ...
	I0722 03:55:56.058828    3911 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/client.key
	I0722 03:55:56.058850    3911 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key.9d35a603
	I0722 03:55:56.058866    3911 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.crt.9d35a603 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0722 03:55:56.176369    3911 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.crt.9d35a603 ...
	I0722 03:55:56.176387    3911 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.crt.9d35a603: {Name:mk56ec66ac2a3d80a126aae24a23c208f41c56a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:55:56.176780    3911 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key.9d35a603 ...
	I0722 03:55:56.176790    3911 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key.9d35a603: {Name:mk0da3ff1ed021cd0c62e370f79895aeed00bfd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:55:56.177042    3911 certs.go:381] copying /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.crt.9d35a603 -> /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.crt
	I0722 03:55:56.177289    3911 certs.go:385] copying /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key.9d35a603 -> /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key
	I0722 03:55:56.177558    3911 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.key
	I0722 03:55:56.177573    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0722 03:55:56.177599    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0722 03:55:56.177621    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0722 03:55:56.177643    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0722 03:55:56.177663    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0722 03:55:56.177684    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0722 03:55:56.177705    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0722 03:55:56.177727    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0722 03:55:56.177832    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/1637.pem (1338 bytes)
	W0722 03:55:56.177883    3911 certs.go:480] ignoring /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/1637_empty.pem, impossibly tiny 0 bytes
	I0722 03:55:56.177892    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 03:55:56.177935    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem (1078 bytes)
	I0722 03:55:56.177980    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem (1123 bytes)
	I0722 03:55:56.178009    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem (1675 bytes)
	I0722 03:55:56.178085    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem (1708 bytes)
	I0722 03:55:56.178123    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/1637.pem -> /usr/share/ca-certificates/1637.pem
	I0722 03:55:56.178148    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem -> /usr/share/ca-certificates/16372.pem
	I0722 03:55:56.178168    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0722 03:55:56.178610    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 03:55:56.201771    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 03:55:56.234700    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 03:55:56.277028    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0722 03:55:56.303799    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 03:55:56.355626    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 03:55:56.423367    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 03:55:56.460516    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 03:55:56.495805    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/1637.pem --> /usr/share/ca-certificates/1637.pem (1338 bytes)
	I0722 03:55:56.523902    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem --> /usr/share/ca-certificates/16372.pem (1708 bytes)
	I0722 03:55:56.561999    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 03:55:56.592542    3911 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 03:55:56.609376    3911 ssh_runner.go:195] Run: openssl version
	I0722 03:55:56.613622    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1637.pem && ln -fs /usr/share/ca-certificates/1637.pem /etc/ssl/certs/1637.pem"
	I0722 03:55:56.622123    3911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1637.pem
	I0722 03:55:56.625637    3911 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:38 /usr/share/ca-certificates/1637.pem
	I0722 03:55:56.625671    3911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1637.pem
	I0722 03:55:56.629816    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1637.pem /etc/ssl/certs/51391683.0"
	I0722 03:55:56.638362    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16372.pem && ln -fs /usr/share/ca-certificates/16372.pem /etc/ssl/certs/16372.pem"
	I0722 03:55:56.646609    3911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16372.pem
	I0722 03:55:56.650063    3911 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:38 /usr/share/ca-certificates/16372.pem
	I0722 03:55:56.650097    3911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16372.pem
	I0722 03:55:56.654257    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16372.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 03:55:56.662670    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 03:55:56.671261    3911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 03:55:56.674720    3911 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 03:55:56.674754    3911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 03:55:56.678972    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
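The runs above repeat one pattern per CA bundle (1637.pem, 16372.pem, minikubeCA.pem): install the PEM under /usr/share/ca-certificates, ask openssl for its subject hash, and link it into /etc/ssl/certs as <hash>.0 so TLS clients on the node can find it. A minimal Go sketch of that hash-and-link step, shelling out to openssl the same way the log does (the path in main is illustrative, not taken from the run):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash mirrors the "openssl x509 -hash" + "ln -fs" steps above:
	// it asks openssl for the certificate's subject hash and creates
	// /etc/ssl/certs/<hash>.0 pointing at the installed PEM file.
	func linkBySubjectHash(pemPath string) error {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
		if err != nil {
			return fmt.Errorf("hashing %s: %w", pemPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join("/etc/ssl/certs", hash+".0")
		_ = os.Remove(link) // emulate ln -fs (force re-link)
		return os.Symlink(pemPath, link)
	}

	func main() {
		if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
	}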
	I0722 03:55:56.687498    3911 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 03:55:56.691047    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 03:55:56.695322    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 03:55:56.699702    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 03:55:56.704065    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 03:55:56.708401    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 03:55:56.712852    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
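The six "-checkend 86400" runs above verify that none of the control-plane certificates expires within the next 24 hours, which is what allows the earlier "skipping valid signed profile cert regeneration" decisions. An equivalent check in Go using crypto/x509 directly instead of the openssl binary (the path in main is a placeholder):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the first certificate in the PEM file will
	// already be expired at now+window, matching "openssl x509 -checkend <seconds>".
	func expiresWithin(pemPath string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(pemPath)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", pemPath)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}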
	I0722 03:55:56.717112    3911 kubeadm.go:392] StartCluster: {Name:ha-090000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:ha-090000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 03:55:56.717233    3911 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0722 03:55:56.730051    3911 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 03:55:56.737806    3911 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 03:55:56.737821    3911 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 03:55:56.737861    3911 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 03:55:56.745356    3911 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 03:55:56.745651    3911 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-090000" does not appear in /Users/jenkins/minikube-integration/19313-1111/kubeconfig
	I0722 03:55:56.745730    3911 kubeconfig.go:62] /Users/jenkins/minikube-integration/19313-1111/kubeconfig needs updating (will repair): [kubeconfig missing "ha-090000" cluster setting kubeconfig missing "ha-090000" context setting]
	I0722 03:55:56.745922    3911 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1111/kubeconfig: {Name:mkf2b240918cd66dabf425a67d7df0a0c9aa8c39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:55:56.746572    3911 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19313-1111/kubeconfig
	I0722 03:55:56.746765    3911 kapi.go:59] client config for ha-090000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/client.key", CAFile:"/Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xc727ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0722 03:55:56.747076    3911 cert_rotation.go:137] Starting client certificate rotation controller
	I0722 03:55:56.747254    3911 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 03:55:56.754607    3911 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0722 03:55:56.754620    3911 kubeadm.go:597] duration metric: took 16.795414ms to restartPrimaryControlPlane
	I0722 03:55:56.754625    3911 kubeadm.go:394] duration metric: took 37.520322ms to StartCluster
	I0722 03:55:56.754634    3911 settings.go:142] acquiring lock: {Name:mk61cf5b2a74edb35dda57ecbe8abc2ea6c58c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:55:56.754711    3911 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19313-1111/kubeconfig
	I0722 03:55:56.755134    3911 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1111/kubeconfig: {Name:mkf2b240918cd66dabf425a67d7df0a0c9aa8c39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:55:56.755360    3911 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 03:55:56.755373    3911 start.go:241] waiting for startup goroutines ...
	I0722 03:55:56.755387    3911 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 03:55:56.755497    3911 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:55:56.799163    3911 out.go:177] * Enabled addons: 
	I0722 03:55:56.820182    3911 addons.go:510] duration metric: took 64.792244ms for enable addons: enabled=[]
	I0722 03:55:56.820230    3911 start.go:246] waiting for cluster config update ...
	I0722 03:55:56.820244    3911 start.go:255] writing updated cluster config ...
	I0722 03:55:56.842189    3911 out.go:177] 
	I0722 03:55:56.863789    3911 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:55:56.863918    3911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/config.json ...
	I0722 03:55:56.886431    3911 out.go:177] * Starting "ha-090000-m02" control-plane node in "ha-090000" cluster
	I0722 03:55:56.928353    3911 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 03:55:56.928403    3911 cache.go:56] Caching tarball of preloaded images
	I0722 03:55:56.928581    3911 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0722 03:55:56.928604    3911 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 03:55:56.928730    3911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/config.json ...
	I0722 03:55:56.929636    3911 start.go:360] acquireMachinesLock for ha-090000-m02: {Name:mk52223550765842aacf96640479870ec8b5e985 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 03:55:56.929748    3911 start.go:364] duration metric: took 80.846µs to acquireMachinesLock for "ha-090000-m02"
	I0722 03:55:56.929773    3911 start.go:96] Skipping create...Using existing machine configuration
	I0722 03:55:56.929782    3911 fix.go:54] fixHost starting: m02
	I0722 03:55:56.930190    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:55:56.930213    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:55:56.939208    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51974
	I0722 03:55:56.939548    3911 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:55:56.939878    3911 main.go:141] libmachine: Using API Version  1
	I0722 03:55:56.939889    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:55:56.940129    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:55:56.940269    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 03:55:56.940364    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetState
	I0722 03:55:56.940445    3911 main.go:141] libmachine: (ha-090000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:55:56.940553    3911 main.go:141] libmachine: (ha-090000-m02) DBG | hyperkit pid from json: 3753
	I0722 03:55:56.941410    3911 main.go:141] libmachine: (ha-090000-m02) DBG | hyperkit pid 3753 missing from process table
	I0722 03:55:56.941430    3911 fix.go:112] recreateIfNeeded on ha-090000-m02: state=Stopped err=<nil>
	I0722 03:55:56.941439    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	W0722 03:55:56.941520    3911 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 03:55:56.963252    3911 out.go:177] * Restarting existing hyperkit VM for "ha-090000-m02" ...
	I0722 03:55:56.984572    3911 main.go:141] libmachine: (ha-090000-m02) Calling .Start
	I0722 03:55:56.984884    3911 main.go:141] libmachine: (ha-090000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:55:56.984972    3911 main.go:141] libmachine: (ha-090000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/hyperkit.pid
	I0722 03:55:56.986700    3911 main.go:141] libmachine: (ha-090000-m02) DBG | hyperkit pid 3753 missing from process table
	I0722 03:55:56.986715    3911 main.go:141] libmachine: (ha-090000-m02) DBG | pid 3753 is in state "Stopped"
	I0722 03:55:56.986731    3911 main.go:141] libmachine: (ha-090000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/hyperkit.pid...
	I0722 03:55:56.987014    3911 main.go:141] libmachine: (ha-090000-m02) DBG | Using UUID a238bb05-e07d-4298-98be-9d336c163b01
	I0722 03:55:57.014110    3911 main.go:141] libmachine: (ha-090000-m02) DBG | Generated MAC 4e:65:fa:f9:26:3
	I0722 03:55:57.014143    3911 main.go:141] libmachine: (ha-090000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-090000
	I0722 03:55:57.014261    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a238bb05-e07d-4298-98be-9d336c163b01", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037b350)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0722 03:55:57.014289    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a238bb05-e07d-4298-98be-9d336c163b01", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037b350)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0722 03:55:57.014330    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "a238bb05-e07d-4298-98be-9d336c163b01", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/ha-090000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/tty,log=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/bzimage,/Users/jenkins/minikube-integration/19313-1111/.minikube/machine
s/ha-090000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-090000"}
	I0722 03:55:57.014365    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U a238bb05-e07d-4298-98be-9d336c163b01 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/ha-090000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/tty,log=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/bzimage,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/initrd,earlyprintk=serial loglevel=3 console=t
tyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-090000"
	I0722 03:55:57.014400    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0722 03:55:57.015680    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 DEBUG: hyperkit: Pid is 3958
	I0722 03:55:57.016180    3911 main.go:141] libmachine: (ha-090000-m02) DBG | Attempt 0
	I0722 03:55:57.016197    3911 main.go:141] libmachine: (ha-090000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:55:57.016259    3911 main.go:141] libmachine: (ha-090000-m02) DBG | hyperkit pid from json: 3958
	I0722 03:55:57.018025    3911 main.go:141] libmachine: (ha-090000-m02) DBG | Searching for 4e:65:fa:f9:26:3 in /var/db/dhcpd_leases ...
	I0722 03:55:57.018041    3911 main.go:141] libmachine: (ha-090000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0722 03:55:57.018086    3911 main.go:141] libmachine: (ha-090000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:de:e:68:47:cf:44 ID:1,de:e:68:47:cf:44 Lease:0x669f8c1b}
	I0722 03:55:57.018095    3911 main.go:141] libmachine: (ha-090000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:b6:20:31:8d:e0:89 ID:1,b6:20:31:8d:e0:89 Lease:0x669e3a76}
	I0722 03:55:57.018102    3911 main.go:141] libmachine: (ha-090000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ca:7d:32:d9:5d:55 ID:1,ca:7d:32:d9:5d:55 Lease:0x669f8bc8}
	I0722 03:55:57.018112    3911 main.go:141] libmachine: (ha-090000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4e:65:fa:f9:26:3 ID:1,4e:65:fa:f9:26:3 Lease:0x669f8b16}
	I0722 03:55:57.018118    3911 main.go:141] libmachine: (ha-090000-m02) DBG | Found match: 4e:65:fa:f9:26:3
	I0722 03:55:57.018122    3911 main.go:141] libmachine: (ha-090000-m02) DBG | IP: 192.169.0.6
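Because the VM keeps its generated MAC across restarts, the driver recovers the node's IP by scanning the host's /var/db/dhcpd_leases for that MAC, as the DBG lines above show. Note the unpadded octet in "4e:65:fa:f9:26:3", so comparisons must tolerate stripped leading zeros. A rough sketch of that lookup over already-parsed entries; the LeaseEntry struct and the sample data are assumptions modelled on the log output, not minikube's actual types:

	package main

	import (
		"fmt"
		"strings"
	)

	// LeaseEntry is an assumed shape for one parsed /var/db/dhcpd_leases record,
	// mirroring the fields printed in the DBG lines above.
	type LeaseEntry struct {
		Name      string
		IPAddress string
		HWAddress string
	}

	// normalizeMAC strips leading zeros from each octet so "4e:65:fa:f9:26:03"
	// and "4e:65:fa:f9:26:3" compare equal, as hyperkit reports the short form.
	func normalizeMAC(mac string) string {
		parts := strings.Split(strings.ToLower(mac), ":")
		for i, p := range parts {
			parts[i] = strings.TrimLeft(p, "0")
			if parts[i] == "" {
				parts[i] = "0"
			}
		}
		return strings.Join(parts, ":")
	}

	// ipForMAC returns the leased IP for the given MAC, or "" if no entry matches.
	func ipForMAC(entries []LeaseEntry, mac string) string {
		want := normalizeMAC(mac)
		for _, e := range entries {
			if normalizeMAC(e.HWAddress) == want {
				return e.IPAddress
			}
		}
		return ""
	}

	func main() {
		entries := []LeaseEntry{
			{Name: "minikube", IPAddress: "192.169.0.5", HWAddress: "de:e:68:47:cf:44"},
			{Name: "minikube", IPAddress: "192.169.0.6", HWAddress: "4e:65:fa:f9:26:3"},
		}
		fmt.Println(ipForMAC(entries, "4e:65:fa:f9:26:03")) // 192.169.0.6
	}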
	I0722 03:55:57.018178    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetConfigRaw
	I0722 03:55:57.018834    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetIP
	I0722 03:55:57.019009    3911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/config.json ...
	I0722 03:55:57.019499    3911 machine.go:94] provisionDockerMachine start ...
	I0722 03:55:57.019509    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 03:55:57.019651    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:55:57.019770    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:55:57.019892    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:55:57.020010    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:55:57.020098    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:55:57.020264    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:55:57.020422    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0722 03:55:57.020435    3911 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 03:55:57.023607    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0722 03:55:57.031862    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0722 03:55:57.032835    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0722 03:55:57.032848    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0722 03:55:57.032855    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0722 03:55:57.032861    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0722 03:55:57.411442    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0722 03:55:57.411461    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0722 03:55:57.526363    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0722 03:55:57.526382    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0722 03:55:57.526390    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0722 03:55:57.526396    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0722 03:55:57.527265    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0722 03:55:57.527278    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0722 03:56:02.785857    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:56:02 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0722 03:56:02.785940    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:56:02 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0722 03:56:02.785949    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:56:02 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0722 03:56:02.812798    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:56:02 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0722 03:56:32.075580    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 03:56:32.075594    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetMachineName
	I0722 03:56:32.075720    3911 buildroot.go:166] provisioning hostname "ha-090000-m02"
	I0722 03:56:32.075731    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetMachineName
	I0722 03:56:32.075826    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:32.075933    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:32.076015    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.076119    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.076212    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:32.076341    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:56:32.076492    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0722 03:56:32.076502    3911 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-090000-m02 && echo "ha-090000-m02" | sudo tee /etc/hostname
	I0722 03:56:32.136897    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-090000-m02
	
	I0722 03:56:32.136912    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:32.137046    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:32.137157    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.137250    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.137341    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:32.137474    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:56:32.137607    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0722 03:56:32.137618    3911 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-090000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-090000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-090000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 03:56:32.192449    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: 
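The SSH snippet above keeps /etc/hosts consistent with the new hostname: if no line already maps ha-090000-m02, it rewrites an existing 127.0.1.1 entry or appends one. The same logic expressed in Go over the file contents, a simplified sketch rather than minikube's provisioner code:

	package main

	import (
		"fmt"
		"strings"
	)

	// ensureHostsEntry mirrors the shell above: leave the file alone if the
	// hostname is already present, rewrite an existing 127.0.1.1 line if there
	// is one, and append a new mapping otherwise.
	func ensureHostsEntry(hosts, hostname string) string {
		lines := strings.Split(hosts, "\n")
		for _, l := range lines {
			for _, f := range strings.Fields(l) {
				if f == hostname {
					return hosts // already mapped, nothing to do
				}
			}
		}
		replaced := false
		for i, l := range lines {
			if strings.HasPrefix(strings.TrimSpace(l), "127.0.1.1") {
				lines[i] = "127.0.1.1 " + hostname
				replaced = true
			}
		}
		if !replaced {
			lines = append(lines, "127.0.1.1 "+hostname)
		}
		return strings.Join(lines, "\n")
	}

	func main() {
		fmt.Println(ensureHostsEntry("127.0.0.1 localhost\n127.0.1.1 old-name", "ha-090000-m02"))
	}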
	I0722 03:56:32.192463    3911 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19313-1111/.minikube CaCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19313-1111/.minikube}
	I0722 03:56:32.192472    3911 buildroot.go:174] setting up certificates
	I0722 03:56:32.192482    3911 provision.go:84] configureAuth start
	I0722 03:56:32.192492    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetMachineName
	I0722 03:56:32.192621    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetIP
	I0722 03:56:32.192721    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:32.192798    3911 provision.go:143] copyHostCerts
	I0722 03:56:32.192826    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem
	I0722 03:56:32.192874    3911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem, removing ...
	I0722 03:56:32.192879    3911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem
	I0722 03:56:32.193015    3911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem (1078 bytes)
	I0722 03:56:32.193230    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem
	I0722 03:56:32.193264    3911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem, removing ...
	I0722 03:56:32.193269    3911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem
	I0722 03:56:32.193346    3911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem (1123 bytes)
	I0722 03:56:32.193513    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem
	I0722 03:56:32.193541    3911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem, removing ...
	I0722 03:56:32.193546    3911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem
	I0722 03:56:32.193618    3911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem (1675 bytes)
	I0722 03:56:32.193767    3911 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem org=jenkins.ha-090000-m02 san=[127.0.0.1 192.169.0.6 ha-090000-m02 localhost minikube]
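The line above generates the Docker server certificate for this node, signed by the machine CA and carrying the listed SANs (loopback, the node IP, the hostname, localhost, minikube). A self-contained Go sketch of issuing such a cert with crypto/x509; file paths, validity period, and the PKCS#1 key format are assumptions for illustration, not the provisioner's actual parameters:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"encoding/pem"
		"log"
		"math/big"
		"net"
		"os"
		"time"
	)

	// mustDecode extracts the first PEM block from a file or aborts; good enough for a sketch.
	func mustDecode(path string) []byte {
		data, err := os.ReadFile(path)
		if err != nil {
			log.Fatal(err)
		}
		block, _ := pem.Decode(data)
		if block == nil {
			log.Fatalf("no PEM data in %s", path)
		}
		return block.Bytes
	}

	func main() {
		// CA material; paths are placeholders and the key is assumed to be PKCS#1 RSA.
		caCert, err := x509.ParseCertificate(mustDecode("ca.pem"))
		if err != nil {
			log.Fatal(err)
		}
		caKey, err := x509.ParsePKCS1PrivateKey(mustDecode("ca-key.pem"))
		if err != nil {
			log.Fatal(err)
		}

		// Server key plus a template carrying the SANs listed in the log entry above.
		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			log.Fatal(err)
		}
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(time.Now().UnixNano()),
			Subject:      pkix.Name{Organization: []string{"jenkins.ha-090000-m02"}},
			NotBefore:    time.Now(),
			NotAfter:     time.Now().AddDate(3, 0, 0),
			KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
			DNSNames:     []string{"ha-090000-m02", "localhost", "minikube"},
			IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.6")},
		}
		der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
		if err != nil {
			log.Fatal(err)
		}
		_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
	}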
	I0722 03:56:32.314909    3911 provision.go:177] copyRemoteCerts
	I0722 03:56:32.314954    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 03:56:32.314968    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:32.315107    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:32.315208    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.315309    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:32.315384    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/id_rsa Username:docker}
	I0722 03:56:32.347809    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0722 03:56:32.347885    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 03:56:32.366931    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0722 03:56:32.366988    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0722 03:56:32.386030    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0722 03:56:32.386103    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0722 03:56:32.404971    3911 provision.go:87] duration metric: took 212.48697ms to configureAuth
	I0722 03:56:32.404983    3911 buildroot.go:189] setting minikube options for container-runtime
	I0722 03:56:32.405138    3911 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:56:32.405152    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 03:56:32.405288    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:32.405375    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:32.405462    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.405546    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.405633    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:32.405741    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:56:32.405866    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0722 03:56:32.405874    3911 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0722 03:56:32.454313    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0722 03:56:32.454324    3911 buildroot.go:70] root file system type: tmpfs
	I0722 03:56:32.454404    3911 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0722 03:56:32.454417    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:32.454548    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:32.454656    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.454765    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.454869    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:32.454989    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:56:32.455128    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0722 03:56:32.455173    3911 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0722 03:56:32.513991    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0722 03:56:32.514007    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:32.514163    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:32.514257    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.514355    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.514458    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:32.514588    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:56:32.514721    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0722 03:56:32.514733    3911 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0722 03:56:34.211339    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0722 03:56:34.211353    3911 machine.go:97] duration metric: took 37.192847433s to provisionDockerMachine
	I0722 03:56:34.211364    3911 start.go:293] postStartSetup for "ha-090000-m02" (driver="hyperkit")
	I0722 03:56:34.211371    3911 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 03:56:34.211386    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 03:56:34.211563    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 03:56:34.211577    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:34.211687    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:34.211786    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:34.211882    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:34.211969    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/id_rsa Username:docker}
	I0722 03:56:34.242978    3911 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 03:56:34.245962    3911 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 03:56:34.245971    3911 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1111/.minikube/addons for local assets ...
	I0722 03:56:34.246060    3911 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1111/.minikube/files for local assets ...
	I0722 03:56:34.246200    3911 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem -> 16372.pem in /etc/ssl/certs
	I0722 03:56:34.246206    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem -> /etc/ssl/certs/16372.pem
	I0722 03:56:34.246360    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 03:56:34.254372    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem --> /etc/ssl/certs/16372.pem (1708 bytes)
	I0722 03:56:34.273009    3911 start.go:296] duration metric: took 61.631077ms for postStartSetup
	I0722 03:56:34.273028    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 03:56:34.273172    3911 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0722 03:56:34.273182    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:34.273265    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:34.273351    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:34.273439    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:34.273519    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/id_rsa Username:docker}
	I0722 03:56:34.305174    3911 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0722 03:56:34.305226    3911 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0722 03:56:34.339922    3911 fix.go:56] duration metric: took 37.411144035s for fixHost
	I0722 03:56:34.339947    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:34.340082    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:34.340179    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:34.340258    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:34.340343    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:34.340478    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:56:34.340622    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0722 03:56:34.340630    3911 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 03:56:34.388578    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721645794.572489059
	
	I0722 03:56:34.388591    3911 fix.go:216] guest clock: 1721645794.572489059
	I0722 03:56:34.388596    3911 fix.go:229] Guest: 2024-07-22 03:56:34.572489059 -0700 PDT Remote: 2024-07-22 03:56:34.339936 -0700 PDT m=+80.375710715 (delta=232.553059ms)
	I0722 03:56:34.388606    3911 fix.go:200] guest clock delta is within tolerance: 232.553059ms
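The fixHost step above reads "date +%s.%N" inside the guest and compares it with the host clock; the 232ms delta is inside the allowed tolerance, so no clock resync is needed. A small sketch of that comparison; the 2s tolerance is an assumption for illustration, not minikube's actual constant:

	package main

	import (
		"fmt"
		"math"
		"time"
	)

	// clockDeltaOK reports whether the guest clock (seconds since the epoch,
	// as returned by `date +%s.%N`) is within tolerance of the host clock.
	func clockDeltaOK(guestEpochSeconds float64, host time.Time, tolerance time.Duration) (time.Duration, bool) {
		guest := time.Unix(0, int64(guestEpochSeconds*float64(time.Second)))
		delta := guest.Sub(host)
		return delta, math.Abs(float64(delta)) <= float64(tolerance)
	}

	func main() {
		// Values taken from the log above: guest 1721645794.572489059 vs host 03:56:34.339936 PDT.
		host := time.Unix(1721645794, 339936000)
		delta, ok := clockDeltaOK(1721645794.572489059, host, 2*time.Second)
		fmt.Printf("delta=%v within tolerance=%v\n", delta, ok)
	}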
	I0722 03:56:34.388609    3911 start.go:83] releasing machines lock for "ha-090000-m02", held for 37.459858552s
	I0722 03:56:34.388627    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 03:56:34.388762    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetIP
	I0722 03:56:34.409792    3911 out.go:177] * Found network options:
	I0722 03:56:34.430136    3911 out.go:177]   - NO_PROXY=192.169.0.5
	W0722 03:56:34.451143    3911 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 03:56:34.451179    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 03:56:34.452017    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 03:56:34.452288    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 03:56:34.452418    3911 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 03:56:34.452457    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	W0722 03:56:34.452511    3911 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 03:56:34.452619    3911 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0722 03:56:34.452639    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:34.452667    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:34.452899    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:34.452939    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:34.453127    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:34.453158    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:34.453309    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:34.453305    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/id_rsa Username:docker}
	I0722 03:56:34.453445    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/id_rsa Username:docker}
	W0722 03:56:34.481920    3911 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 03:56:34.481981    3911 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 03:56:34.527590    3911 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 03:56:34.527602    3911 start.go:495] detecting cgroup driver to use...
	I0722 03:56:34.527664    3911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 03:56:34.542920    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0722 03:56:34.551387    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0722 03:56:34.559553    3911 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0722 03:56:34.559598    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0722 03:56:34.567825    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 03:56:34.576145    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0722 03:56:34.584472    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 03:56:34.592914    3911 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 03:56:34.601360    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0722 03:56:34.609666    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0722 03:56:34.618581    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0722 03:56:34.626849    3911 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 03:56:34.634297    3911 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 03:56:34.642011    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:56:34.733806    3911 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0722 03:56:34.753393    3911 start.go:495] detecting cgroup driver to use...
	I0722 03:56:34.753463    3911 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0722 03:56:34.769228    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 03:56:34.781756    3911 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 03:56:34.797930    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 03:56:34.808316    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 03:56:34.818407    3911 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0722 03:56:34.839910    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 03:56:34.852187    3911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 03:56:34.867777    3911 ssh_runner.go:195] Run: which cri-dockerd
	I0722 03:56:34.870845    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0722 03:56:34.878342    3911 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0722 03:56:34.891766    3911 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0722 03:56:34.986612    3911 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0722 03:56:35.092574    3911 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0722 03:56:35.092596    3911 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0722 03:56:35.106385    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:56:35.202045    3911 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 03:56:37.547949    3911 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.345948624s)
	I0722 03:56:37.548007    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0722 03:56:37.559709    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 03:56:37.570592    3911 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0722 03:56:37.669571    3911 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0722 03:56:37.763201    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:56:37.875925    3911 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0722 03:56:37.889982    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 03:56:37.900245    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:56:38.003656    3911 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
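
docker.go above reports configuring Docker to use cgroupfs and ships a 130-byte /etc/docker/daemon.json before restarting docker, cri-docker.socket and cri-docker.service. The exact file contents are not shown in the log; the sketch below writes one plausible shape of such a daemon.json (the JSON body is an assumption, not copied from minikube), and the daemon-reload plus service restarts from the log are still required for the new cgroup driver to take effect.

    package main

    import (
    	"log"
    	"os"
    )

    // assumed content; the log only reports that a 130-byte daemon.json was copied over
    const daemonJSON = `{
      "exec-opts": ["native.cgroupdriver=cgroupfs"],
      "log-driver": "json-file",
      "log-opts": {"max-size": "100m"},
      "storage-driver": "overlay2"
    }
    `

    func main() {
    	if err := os.WriteFile("/etc/docker/daemon.json", []byte(daemonJSON), 0644); err != nil {
    		log.Fatal(err)
    	}
    	// followed by "systemctl daemon-reload" and "systemctl restart docker", as in the log
    }
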
	I0722 03:56:38.067963    3911 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0722 03:56:38.068036    3911 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0722 03:56:38.072622    3911 start.go:563] Will wait 60s for crictl version
	I0722 03:56:38.072673    3911 ssh_runner.go:195] Run: which crictl
	I0722 03:56:38.075745    3911 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 03:56:38.103382    3911 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0722 03:56:38.103467    3911 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 03:56:38.119903    3911 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 03:56:38.160816    3911 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0722 03:56:38.182482    3911 out.go:177]   - env NO_PROXY=192.169.0.5
	I0722 03:56:38.203478    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetIP
	I0722 03:56:38.203850    3911 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0722 03:56:38.207987    3911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 03:56:38.217642    3911 mustload.go:65] Loading cluster: ha-090000
	I0722 03:56:38.217804    3911 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:56:38.218020    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:56:38.218035    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:56:38.226637    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51996
	I0722 03:56:38.226983    3911 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:56:38.227325    3911 main.go:141] libmachine: Using API Version  1
	I0722 03:56:38.227343    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:56:38.227630    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:56:38.227748    3911 main.go:141] libmachine: (ha-090000) Calling .GetState
	I0722 03:56:38.227836    3911 main.go:141] libmachine: (ha-090000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:56:38.227899    3911 main.go:141] libmachine: (ha-090000) DBG | hyperkit pid from json: 3926
	I0722 03:56:38.228835    3911 host.go:66] Checking if "ha-090000" exists ...
	I0722 03:56:38.229086    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:56:38.229101    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:56:38.237412    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51998
	I0722 03:56:38.237753    3911 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:56:38.238100    3911 main.go:141] libmachine: Using API Version  1
	I0722 03:56:38.238118    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:56:38.238328    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:56:38.238453    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:56:38.238565    3911 certs.go:68] Setting up /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000 for IP: 192.169.0.6
	I0722 03:56:38.238571    3911 certs.go:194] generating shared ca certs ...
	I0722 03:56:38.238580    3911 certs.go:226] acquiring lock for ca certs: {Name:mk31b6ba3ba4e51acc59db740baf7c8ba8dd988b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:56:38.238710    3911 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.key
	I0722 03:56:38.238765    3911 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.key
	I0722 03:56:38.238773    3911 certs.go:256] generating profile certs ...
	I0722 03:56:38.238865    3911 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/client.key
	I0722 03:56:38.238954    3911 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key.cd5997a2
	I0722 03:56:38.239013    3911 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.key
	I0722 03:56:38.239026    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0722 03:56:38.239049    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0722 03:56:38.239069    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0722 03:56:38.239087    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0722 03:56:38.239104    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0722 03:56:38.239123    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0722 03:56:38.239143    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0722 03:56:38.239166    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0722 03:56:38.239250    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/1637.pem (1338 bytes)
	W0722 03:56:38.239289    3911 certs.go:480] ignoring /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/1637_empty.pem, impossibly tiny 0 bytes
	I0722 03:56:38.239297    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 03:56:38.239330    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem (1078 bytes)
	I0722 03:56:38.239361    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem (1123 bytes)
	I0722 03:56:38.239392    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem (1675 bytes)
	I0722 03:56:38.239457    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem (1708 bytes)
	I0722 03:56:38.239492    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/1637.pem -> /usr/share/ca-certificates/1637.pem
	I0722 03:56:38.239513    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem -> /usr/share/ca-certificates/16372.pem
	I0722 03:56:38.239532    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0722 03:56:38.239558    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:56:38.239660    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:56:38.239755    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:56:38.239850    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:56:38.239942    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/id_rsa Username:docker}
	I0722 03:56:38.265993    3911 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.pub
	I0722 03:56:38.269678    3911 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0722 03:56:38.278304    3911 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/sa.key
	I0722 03:56:38.281402    3911 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0722 03:56:38.289616    3911 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.crt
	I0722 03:56:38.292667    3911 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0722 03:56:38.300512    3911 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/front-proxy-ca.key
	I0722 03:56:38.303570    3911 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0722 03:56:38.311600    3911 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.crt
	I0722 03:56:38.314768    3911 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0722 03:56:38.322792    3911 ssh_runner.go:195] Run: stat -c %s /var/lib/minikube/certs/etcd/ca.key
	I0722 03:56:38.325989    3911 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0722 03:56:38.334090    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 03:56:38.354251    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 03:56:38.373942    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 03:56:38.393826    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0722 03:56:38.413300    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 03:56:38.433234    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 03:56:38.452691    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 03:56:38.472206    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 03:56:38.492624    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/1637.pem --> /usr/share/ca-certificates/1637.pem (1338 bytes)
	I0722 03:56:38.511779    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem --> /usr/share/ca-certificates/16372.pem (1708 bytes)
	I0722 03:56:38.531604    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 03:56:38.550960    3911 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0722 03:56:38.564536    3911 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0722 03:56:38.577906    3911 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0722 03:56:38.591620    3911 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0722 03:56:38.605203    3911 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0722 03:56:38.619039    3911 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0722 03:56:38.633179    3911 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0722 03:56:38.646763    3911 ssh_runner.go:195] Run: openssl version
	I0722 03:56:38.650909    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16372.pem && ln -fs /usr/share/ca-certificates/16372.pem /etc/ssl/certs/16372.pem"
	I0722 03:56:38.659202    3911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16372.pem
	I0722 03:56:38.662546    3911 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:38 /usr/share/ca-certificates/16372.pem
	I0722 03:56:38.662579    3911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16372.pem
	I0722 03:56:38.666667    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16372.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 03:56:38.675008    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 03:56:38.683335    3911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 03:56:38.686876    3911 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 03:56:38.686923    3911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 03:56:38.691071    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 03:56:38.699373    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1637.pem && ln -fs /usr/share/ca-certificates/1637.pem /etc/ssl/certs/1637.pem"
	I0722 03:56:38.707510    3911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1637.pem
	I0722 03:56:38.710890    3911 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:38 /usr/share/ca-certificates/1637.pem
	I0722 03:56:38.710923    3911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1637.pem
	I0722 03:56:38.715062    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1637.pem /etc/ssl/certs/51391683.0"
	I0722 03:56:38.723255    3911 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 03:56:38.726701    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 03:56:38.730990    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 03:56:38.735283    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 03:56:38.739568    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 03:56:38.743725    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 03:56:38.747941    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
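
The six openssl ... -checkend 86400 runs above verify that each control-plane certificate will still be valid at least 24 hours from now. A rough Go equivalent of one such check using crypto/x509 (the certificate path is taken from the log; any of the other paths would work the same way):

    package main

    import (
    	"crypto/x509"
    	"encoding/pem"
    	"fmt"
    	"log"
    	"os"
    	"time"
    )

    func main() {
    	data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
    	if err != nil {
    		log.Fatal(err)
    	}
    	block, _ := pem.Decode(data)
    	if block == nil {
    		log.Fatal("no PEM block found")
    	}
    	cert, err := x509.ParseCertificate(block.Bytes)
    	if err != nil {
    		log.Fatal(err)
    	}
    	// equivalent of `openssl x509 -noout -checkend 86400`: valid 24h from now?
    	deadline := time.Now().Add(86400 * time.Second)
    	fmt.Println("still valid in 24h:", deadline.Before(cert.NotAfter))
    }
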
	I0722 03:56:38.752113    3911 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.30.3 docker true true} ...
	I0722 03:56:38.752169    3911 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-090000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-090000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 03:56:38.752183    3911 kube-vip.go:115] generating kube-vip config ...
	I0722 03:56:38.752213    3911 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0722 03:56:38.764297    3911 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0722 03:56:38.764339    3911 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
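
The generated kube-vip static pod above advertises the control-plane VIP 192.169.0.254 on eth0 via ARP, takes leader election through the plndr-cp-lock lease in kube-system, and load-balances port 8443 across the control-plane nodes. A tiny, purely illustrative probe that checks the VIP is answering on 8443 (not part of minikube; address and port are taken from the config above):

    package main

    import (
    	"fmt"
    	"net"
    	"time"
    )

    func main() {
    	conn, err := net.DialTimeout("tcp", "192.169.0.254:8443", 3*time.Second)
    	if err != nil {
    		fmt.Println("VIP not reachable:", err)
    		return
    	}
    	conn.Close()
    	fmt.Println("kube-vip VIP answering on 8443")
    }
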
	I0722 03:56:38.764386    3911 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 03:56:38.777566    3911 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 03:56:38.777617    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0722 03:56:38.785844    3911 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0722 03:56:38.799378    3911 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 03:56:38.812569    3911 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0722 03:56:38.826035    3911 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0722 03:56:38.829004    3911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 03:56:38.838894    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:56:38.934878    3911 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 03:56:38.949889    3911 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 03:56:38.950085    3911 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:56:38.971273    3911 out.go:177] * Verifying Kubernetes components...
	I0722 03:56:38.991992    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:56:39.123554    3911 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 03:56:39.136167    3911 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19313-1111/kubeconfig
	I0722 03:56:39.136377    3911 kapi.go:59] client config for ha-090000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/client.key", CAFile:"/Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xc727ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0722 03:56:39.136421    3911 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0722 03:56:39.136590    3911 node_ready.go:35] waiting up to 6m0s for node "ha-090000-m02" to be "Ready" ...
	I0722 03:56:39.136660    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:39.136665    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:39.136672    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:39.136677    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:40.137255    3911 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0722 03:56:40.137479    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:40.137503    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:40.137521    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:40.137534    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:47.940026    3911 round_trippers.go:574] Response Status: 200 OK in 7802 milliseconds
	I0722 03:56:47.940733    3911 node_ready.go:49] node "ha-090000-m02" has status "Ready":"True"
	I0722 03:56:47.940746    3911 node_ready.go:38] duration metric: took 8.804377648s for node "ha-090000-m02" to be "Ready" ...
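
node_ready.go's wait loop polls the apiserver for the Node object until its Ready condition reports True, which here took about 8.8 seconds. Below is a minimal client-go sketch of that kind of poll, assuming the kubeconfig path logged earlier in this run; it illustrates the pattern, not minikube's actual implementation.

    package main

    import (
    	"context"
    	"fmt"
    	"log"
    	"time"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19313-1111/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	deadline := time.Now().Add(6 * time.Minute) // same budget as "waiting up to 6m0s"
    	for time.Now().Before(deadline) {
    		node, err := client.CoreV1().Nodes().Get(context.TODO(), "ha-090000-m02", metav1.GetOptions{})
    		if err == nil {
    			for _, c := range node.Status.Conditions {
    				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
    					fmt.Println("node is Ready")
    					return
    				}
    			}
    		}
    		time.Sleep(time.Second)
    	}
    	log.Fatal("timed out waiting for node to become Ready")
    }
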
	I0722 03:56:47.940753    3911 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 03:56:47.940808    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0722 03:56:47.940815    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:47.940823    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:47.940827    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.019911    3911 round_trippers.go:574] Response Status: 200 OK in 79 milliseconds
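
The pod_ready.go phase lists the kube-system pods carrying the system-critical labels named above (k8s-app=kube-dns, component=etcd, and so on) and then waits on each one individually. A short client-go sketch of listing pods by one such label selector and reporting their Ready condition; again an illustration of the API calls involved, not minikube's code.

    package main

    import (
    	"context"
    	"fmt"
    	"log"

    	corev1 "k8s.io/api/core/v1"
    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19313-1111/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	client, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		log.Fatal(err)
    	}
    	pods, err := client.CoreV1().Pods("kube-system").List(context.TODO(),
    		metav1.ListOptions{LabelSelector: "k8s-app=kube-dns"})
    	if err != nil {
    		log.Fatal(err)
    	}
    	for _, p := range pods.Items {
    		ready := false
    		for _, c := range p.Status.Conditions {
    			if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
    				ready = true
    			}
    		}
    		fmt.Printf("%s Ready=%v\n", p.Name, ready)
    	}
    }
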
	I0722 03:56:48.026784    3911 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lf5mv" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.026849    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lf5mv
	I0722 03:56:48.026855    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.026862    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.026866    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.031605    3911 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 03:56:48.032135    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:48.032143    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.032150    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.032153    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.034575    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:48.034884    3911 pod_ready.go:92] pod "coredns-7db6d8ff4d-lf5mv" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:48.034894    3911 pod_ready.go:81] duration metric: took 8.095254ms for pod "coredns-7db6d8ff4d-lf5mv" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.034902    3911 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mjc97" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.034940    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mjc97
	I0722 03:56:48.034951    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.034959    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.034963    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.037811    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:48.038390    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:48.038397    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.038403    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.038412    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.042255    3911 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 03:56:48.042713    3911 pod_ready.go:92] pod "coredns-7db6d8ff4d-mjc97" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:48.042723    3911 pod_ready.go:81] duration metric: took 7.815334ms for pod "coredns-7db6d8ff4d-mjc97" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.042730    3911 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-090000" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.042769    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-090000
	I0722 03:56:48.042774    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.042780    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.042784    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.046998    3911 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 03:56:48.047505    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:48.047512    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.047517    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.047519    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.050594    3911 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 03:56:48.051034    3911 pod_ready.go:92] pod "etcd-ha-090000" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:48.051045    3911 pod_ready.go:81] duration metric: took 8.309873ms for pod "etcd-ha-090000" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.051052    3911 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-090000-m02" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.051096    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-090000-m02
	I0722 03:56:48.051102    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.051108    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.051112    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.055364    3911 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 03:56:48.055818    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:48.055827    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.055833    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.055837    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.058858    3911 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 03:56:48.059331    3911 pod_ready.go:92] pod "etcd-ha-090000-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:48.059342    3911 pod_ready.go:81] duration metric: took 8.283096ms for pod "etcd-ha-090000-m02" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.059349    3911 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-090000-m03" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.059399    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-090000-m03
	I0722 03:56:48.059405    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.059412    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.059415    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.069366    3911 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0722 03:56:48.140952    3911 request.go:629] Waited for 71.140962ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:48.140996    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:48.141001    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.141007    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.141013    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.150505    3911 round_trippers.go:574] Response Status: 404 Not Found in 9 milliseconds
	I0722 03:56:48.150672    3911 pod_ready.go:97] node "ha-090000-m03" hosting pod "etcd-ha-090000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:48.150684    3911 pod_ready.go:81] duration metric: took 91.332094ms for pod "etcd-ha-090000-m03" in "kube-system" namespace to be "Ready" ...
	E0722 03:56:48.150693    3911 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-090000-m03" hosting pod "etcd-ha-090000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
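
The repeated "Waited for ... due to client-side throttling, not priority and fairness" lines that follow come from client-go's client-side rate limiter, which queues requests once the QPS/Burst budget on rest.Config is exhausted; the kapi.go config dumped earlier shows QPS:0, Burst:0, i.e. the client-go defaults. A minimal sketch of where that budget lives (the values below are illustrative, not minikube's defaults):

    package main

    import (
    	"log"

    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    )

    func main() {
    	cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19313-1111/kubeconfig")
    	if err != nil {
    		log.Fatal(err)
    	}
    	// client-go throttles on the client side once this budget is exceeded, which is what
    	// produces the "Waited for ... due to client-side throttling" lines in the log.
    	cfg.QPS = 50    // steady-state requests per second (illustrative value)
    	cfg.Burst = 100 // short bursts allowed above QPS (illustrative value)
    	if _, err := kubernetes.NewForConfig(cfg); err != nil {
    		log.Fatal(err)
    	}
    }
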
	I0722 03:56:48.150707    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-090000" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.341259    3911 request.go:629] Waited for 190.473586ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-090000
	I0722 03:56:48.341296    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-090000
	I0722 03:56:48.341301    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.341307    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.341311    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.346534    3911 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 03:56:48.541247    3911 request.go:629] Waited for 194.341501ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:48.541301    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:48.541310    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.541317    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.541321    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.543864    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:48.544294    3911 pod_ready.go:92] pod "kube-apiserver-ha-090000" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:48.544304    3911 pod_ready.go:81] duration metric: took 393.600781ms for pod "kube-apiserver-ha-090000" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.544310    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-090000-m02" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.740936    3911 request.go:629] Waited for 196.590173ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-090000-m02
	I0722 03:56:48.741009    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-090000-m02
	I0722 03:56:48.741017    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.741025    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.741032    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.743601    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:48.941584    3911 request.go:629] Waited for 197.554429ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:48.941670    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:48.941676    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.941681    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.941685    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.943442    3911 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0722 03:56:48.943712    3911 pod_ready.go:92] pod "kube-apiserver-ha-090000-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:48.943722    3911 pod_ready.go:81] duration metric: took 399.417249ms for pod "kube-apiserver-ha-090000-m02" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.943728    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-090000-m03" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:49.142238    3911 request.go:629] Waited for 198.455178ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-090000-m03
	I0722 03:56:49.142276    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-090000-m03
	I0722 03:56:49.142283    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:49.142291    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:49.142297    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:49.144759    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:49.341711    3911 request.go:629] Waited for 196.420201ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:49.341743    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:49.341748    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:49.341754    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:49.341757    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:49.343407    3911 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0722 03:56:49.343465    3911 pod_ready.go:97] node "ha-090000-m03" hosting pod "kube-apiserver-ha-090000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:49.343477    3911 pod_ready.go:81] duration metric: took 399.754899ms for pod "kube-apiserver-ha-090000-m03" in "kube-system" namespace to be "Ready" ...
	E0722 03:56:49.343485    3911 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-090000-m03" hosting pod "kube-apiserver-ha-090000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:49.343492    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-090000" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:49.540820    3911 request.go:629] Waited for 197.295627ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-090000
	I0722 03:56:49.540859    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-090000
	I0722 03:56:49.540864    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:49.540873    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:49.540889    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:49.542752    3911 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0722 03:56:49.741810    3911 request.go:629] Waited for 198.496804ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:49.741941    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:49.741953    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:49.741965    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:49.741971    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:49.745200    3911 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 03:56:49.746626    3911 pod_ready.go:92] pod "kube-controller-manager-ha-090000" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:49.746670    3911 pod_ready.go:81] duration metric: took 403.181202ms for pod "kube-controller-manager-ha-090000" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:49.746679    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-090000-m02" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:49.942498    3911 request.go:629] Waited for 195.70501ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-090000-m02
	I0722 03:56:49.942556    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-090000-m02
	I0722 03:56:49.942566    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:49.942576    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:49.942583    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:49.945821    3911 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 03:56:50.141728    3911 request.go:629] Waited for 194.653258ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:50.141778    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:50.141788    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:50.141874    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:50.141884    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:50.144857    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:50.145401    3911 pod_ready.go:92] pod "kube-controller-manager-ha-090000-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:50.145413    3911 pod_ready.go:81] duration metric: took 398.731517ms for pod "kube-controller-manager-ha-090000-m02" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:50.145421    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-090000-m03" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:50.342252    3911 request.go:629] Waited for 196.790992ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-090000-m03
	I0722 03:56:50.342380    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-090000-m03
	I0722 03:56:50.342391    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:50.342402    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:50.342409    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:50.345338    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:50.541942    3911 request.go:629] Waited for 196.02759ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:50.542016    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:50.542024    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:50.542030    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:50.542035    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:50.543861    3911 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0722 03:56:50.543979    3911 pod_ready.go:97] node "ha-090000-m03" hosting pod "kube-controller-manager-ha-090000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:50.543991    3911 pod_ready.go:81] duration metric: took 398.575179ms for pod "kube-controller-manager-ha-090000-m03" in "kube-system" namespace to be "Ready" ...
	E0722 03:56:50.543999    3911 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-090000-m03" hosting pod "kube-controller-manager-ha-090000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:50.544007    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8f92w" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:50.741981    3911 request.go:629] Waited for 197.931605ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f92w
	I0722 03:56:50.742035    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f92w
	I0722 03:56:50.742108    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:50.742123    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:50.742139    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:50.745292    3911 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 03:56:50.941201    3911 request.go:629] Waited for 195.378005ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m04
	I0722 03:56:50.941242    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m04
	I0722 03:56:50.941250    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:50.941279    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:50.941285    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:50.943392    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:50.943959    3911 pod_ready.go:92] pod "kube-proxy-8f92w" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:50.943968    3911 pod_ready.go:81] duration metric: took 399.965093ms for pod "kube-proxy-8f92w" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:50.943975    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8wl7h" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:51.141802    3911 request.go:629] Waited for 197.795735ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8wl7h
	I0722 03:56:51.141881    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8wl7h
	I0722 03:56:51.141889    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:51.141897    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:51.141901    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:51.144430    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:51.341886    3911 request.go:629] Waited for 196.964343ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:51.341949    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:51.342008    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:51.342021    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:51.342042    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:51.345071    3911 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 03:56:51.345563    3911 pod_ready.go:92] pod "kube-proxy-8wl7h" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:51.345575    3911 pod_ready.go:81] duration metric: took 401.60562ms for pod "kube-proxy-8wl7h" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:51.345584    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s5kg7" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:51.541992    3911 request.go:629] Waited for 196.373771ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5kg7
	I0722 03:56:51.542055    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5kg7
	I0722 03:56:51.542062    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:51.542069    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:51.542073    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:51.544061    3911 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0722 03:56:51.741851    3911 request.go:629] Waited for 197.301001ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:51.741903    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:51.741920    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:51.741972    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:51.741981    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:51.744924    3911 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0722 03:56:51.745061    3911 pod_ready.go:97] node "ha-090000-m03" hosting pod "kube-proxy-s5kg7" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:51.745083    3911 pod_ready.go:81] duration metric: took 399.503782ms for pod "kube-proxy-s5kg7" in "kube-system" namespace to be "Ready" ...
	E0722 03:56:51.745093    3911 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-090000-m03" hosting pod "kube-proxy-s5kg7" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:51.745099    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xzpdq" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:51.942237    3911 request.go:629] Waited for 197.092533ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xzpdq
	I0722 03:56:51.942331    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xzpdq
	I0722 03:56:51.942339    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:51.942348    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:51.942352    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:51.944379    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:52.140792    3911 request.go:629] Waited for 195.988207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:52.140891    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:52.140898    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:52.140905    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:52.140908    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:52.143865    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:52.144152    3911 pod_ready.go:92] pod "kube-proxy-xzpdq" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:52.144162    3911 pod_ready.go:81] duration metric: took 399.065856ms for pod "kube-proxy-xzpdq" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:52.144174    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-090000" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:52.341088    3911 request.go:629] Waited for 196.884909ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-090000
	I0722 03:56:52.341120    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-090000
	I0722 03:56:52.341125    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:52.341131    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:52.341158    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:52.342922    3911 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0722 03:56:52.541268    3911 request.go:629] Waited for 197.724279ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:52.541331    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:52.541336    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:52.541343    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:52.541348    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:52.543046    3911 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0722 03:56:52.543447    3911 pod_ready.go:92] pod "kube-scheduler-ha-090000" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:52.543457    3911 pod_ready.go:81] duration metric: took 399.28772ms for pod "kube-scheduler-ha-090000" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:52.543466    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-090000-m02" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:52.741611    3911 request.go:629] Waited for 198.11239ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-090000-m02
	I0722 03:56:52.741678    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-090000-m02
	I0722 03:56:52.741684    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:52.741690    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:52.741694    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:52.743685    3911 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0722 03:56:52.941884    3911 request.go:629] Waited for 197.596709ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:52.941966    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:52.941974    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:52.941983    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:52.941990    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:52.944672    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:52.944946    3911 pod_ready.go:92] pod "kube-scheduler-ha-090000-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:52.944957    3911 pod_ready.go:81] duration metric: took 401.495544ms for pod "kube-scheduler-ha-090000-m02" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:52.944964    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-090000-m03" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:53.140781    3911 request.go:629] Waited for 195.779713ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-090000-m03
	I0722 03:56:53.140822    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-090000-m03
	I0722 03:56:53.140828    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:53.140846    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:53.140857    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:53.143259    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:53.340903    3911 request.go:629] Waited for 197.282616ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:53.341040    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:53.341054    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:53.341066    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:53.341072    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:53.343900    3911 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0722 03:56:53.344052    3911 pod_ready.go:97] node "ha-090000-m03" hosting pod "kube-scheduler-ha-090000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:53.344080    3911 pod_ready.go:81] duration metric: took 399.121362ms for pod "kube-scheduler-ha-090000-m03" in "kube-system" namespace to be "Ready" ...
	E0722 03:56:53.344087    3911 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-090000-m03" hosting pod "kube-scheduler-ha-090000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:53.344093    3911 pod_ready.go:38] duration metric: took 5.403478999s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 03:56:53.344113    3911 api_server.go:52] waiting for apiserver process to appear ...
	I0722 03:56:53.344169    3911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 03:56:53.355872    3911 api_server.go:72] duration metric: took 14.406346458s to wait for apiserver process to appear ...
	I0722 03:56:53.355884    3911 api_server.go:88] waiting for apiserver healthz status ...
	I0722 03:56:53.355903    3911 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0722 03:56:53.360168    3911 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0722 03:56:53.360204    3911 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0722 03:56:53.360209    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:53.360215    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:53.360219    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:53.360847    3911 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0722 03:56:53.360928    3911 api_server.go:141] control plane version: v1.30.3
	I0722 03:56:53.360938    3911 api_server.go:131] duration metric: took 5.049309ms to wait for apiserver health ...
	I0722 03:56:53.360953    3911 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 03:56:53.540855    3911 request.go:629] Waited for 179.859471ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0722 03:56:53.540957    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0722 03:56:53.540968    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:53.540979    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:53.540985    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:53.546462    3911 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 03:56:53.551792    3911 system_pods.go:59] 26 kube-system pods found
	I0722 03:56:53.551807    3911 system_pods.go:61] "coredns-7db6d8ff4d-lf5mv" [cd051db1-dcbb-4fee-85d9-be13d1be38ec] Running
	I0722 03:56:53.551813    3911 system_pods.go:61] "coredns-7db6d8ff4d-mjc97" [ac1f1032-14ce-4c0c-b95b-a86bd4ef7810] Running
	I0722 03:56:53.551817    3911 system_pods.go:61] "etcd-ha-090000" [ec0787c7-a5cb-4375-b6c7-04e80160dbd9] Running
	I0722 03:56:53.551820    3911 system_pods.go:61] "etcd-ha-090000-m02" [70e6e1d6-208c-45b6-ad64-c10be5faedbb] Running
	I0722 03:56:53.551823    3911 system_pods.go:61] "etcd-ha-090000-m03" [ed74b70b-4483-4ac9-9db2-5c1507439fbf] Running
	I0722 03:56:53.551830    3911 system_pods.go:61] "kindnet-kqb2r" [58565238-777a-421f-a15d-38bd5daf596e] Running
	I0722 03:56:53.551834    3911 system_pods.go:61] "kindnet-lf6b4" [aadac04f-abbe-481b-accf-df0991b98748] Running
	I0722 03:56:53.551836    3911 system_pods.go:61] "kindnet-mqxjd" [439b0e4a-14b8-4556-9ae6-6a26590b6d5d] Running
	I0722 03:56:53.551839    3911 system_pods.go:61] "kindnet-xt575" [21e859c8-a102-4b48-ba9d-3b3902be8ba1] Running
	I0722 03:56:53.551842    3911 system_pods.go:61] "kube-apiserver-ha-090000" [c0377564-cef8-4807-8ab1-3fc6f2607591] Running
	I0722 03:56:53.551844    3911 system_pods.go:61] "kube-apiserver-ha-090000-m02" [87130092-7fea-4cf8-a1b4-b2b853d60334] Running
	I0722 03:56:53.551847    3911 system_pods.go:61] "kube-apiserver-ha-090000-m03" [056a2588-da71-4189-93cd-10a92f10d8d4] Running
	I0722 03:56:53.551850    3911 system_pods.go:61] "kube-controller-manager-ha-090000" [89cfb4c4-8d84-42f2-bae3-3962aada627b] Running
	I0722 03:56:53.551853    3911 system_pods.go:61] "kube-controller-manager-ha-090000-m02" [9173940b-a550-4f67-b37c-78e456b18a13] Running
	I0722 03:56:53.551855    3911 system_pods.go:61] "kube-controller-manager-ha-090000-m03" [75846dcb-f9d9-46c6-8eaa-857c3da39b9a] Running
	I0722 03:56:53.551858    3911 system_pods.go:61] "kube-proxy-8f92w" [10da7b52-073d-40c9-87ea-8484d68147e3] Running
	I0722 03:56:53.551861    3911 system_pods.go:61] "kube-proxy-8wl7h" [210fb608-afcf-4f5c-9b75-cc949c268854] Running
	I0722 03:56:53.551864    3911 system_pods.go:61] "kube-proxy-s5kg7" [8513335b-221c-4602-9aaa-b1e85b828bb4] Running
	I0722 03:56:53.551866    3911 system_pods.go:61] "kube-proxy-xzpdq" [d7b9fd02-6f5f-47a4-afe2-1dab0d9141b7] Running
	I0722 03:56:53.551869    3911 system_pods.go:61] "kube-scheduler-ha-090000" [82031515-de24-4248-97ff-2bb892974db3] Running
	I0722 03:56:53.551872    3911 system_pods.go:61] "kube-scheduler-ha-090000-m02" [2f042e46-2b51-4b25-b94a-c22dde65c7fa] Running
	I0722 03:56:53.551874    3911 system_pods.go:61] "kube-scheduler-ha-090000-m03" [bf7cca91-4911-4f81-bde0-cbb089bd2fd2] Running
	I0722 03:56:53.551877    3911 system_pods.go:61] "kube-vip-ha-090000" [46ed0197-35a7-40cd-8480-0e66a09d4d69] Running
	I0722 03:56:53.551880    3911 system_pods.go:61] "kube-vip-ha-090000-m02" [b6025cfc-c08e-4981-b1b6-4f26ba5d5538] Running
	I0722 03:56:53.551882    3911 system_pods.go:61] "kube-vip-ha-090000-m03" [e7bc337b-5f22-4c55-86cb-1417b15343bd] Running
	I0722 03:56:53.551885    3911 system_pods.go:61] "storage-provisioner" [c1214845-bf0e-4808-9e11-faf18dd3cb3f] Running
	I0722 03:56:53.551889    3911 system_pods.go:74] duration metric: took 190.935916ms to wait for pod list to return data ...
	I0722 03:56:53.551895    3911 default_sa.go:34] waiting for default service account to be created ...
	I0722 03:56:53.741633    3911 request.go:629] Waited for 189.696516ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0722 03:56:53.741686    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0722 03:56:53.741703    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:53.741714    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:53.741724    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:53.744889    3911 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 03:56:53.745045    3911 default_sa.go:45] found service account: "default"
	I0722 03:56:53.745059    3911 default_sa.go:55] duration metric: took 193.164449ms for default service account to be created ...
	I0722 03:56:53.745066    3911 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 03:56:53.941905    3911 request.go:629] Waited for 196.736167ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0722 03:56:53.941953    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0722 03:56:53.941965    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:53.941979    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:53.941986    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:53.947853    3911 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 03:56:53.953138    3911 system_pods.go:86] 26 kube-system pods found
	I0722 03:56:53.953150    3911 system_pods.go:89] "coredns-7db6d8ff4d-lf5mv" [cd051db1-dcbb-4fee-85d9-be13d1be38ec] Running
	I0722 03:56:53.953154    3911 system_pods.go:89] "coredns-7db6d8ff4d-mjc97" [ac1f1032-14ce-4c0c-b95b-a86bd4ef7810] Running
	I0722 03:56:53.953158    3911 system_pods.go:89] "etcd-ha-090000" [ec0787c7-a5cb-4375-b6c7-04e80160dbd9] Running
	I0722 03:56:53.953161    3911 system_pods.go:89] "etcd-ha-090000-m02" [70e6e1d6-208c-45b6-ad64-c10be5faedbb] Running
	I0722 03:56:53.953164    3911 system_pods.go:89] "etcd-ha-090000-m03" [ed74b70b-4483-4ac9-9db2-5c1507439fbf] Running
	I0722 03:56:53.953167    3911 system_pods.go:89] "kindnet-kqb2r" [58565238-777a-421f-a15d-38bd5daf596e] Running
	I0722 03:56:53.953171    3911 system_pods.go:89] "kindnet-lf6b4" [aadac04f-abbe-481b-accf-df0991b98748] Running
	I0722 03:56:53.953174    3911 system_pods.go:89] "kindnet-mqxjd" [439b0e4a-14b8-4556-9ae6-6a26590b6d5d] Running
	I0722 03:56:53.953176    3911 system_pods.go:89] "kindnet-xt575" [21e859c8-a102-4b48-ba9d-3b3902be8ba1] Running
	I0722 03:56:53.953179    3911 system_pods.go:89] "kube-apiserver-ha-090000" [c0377564-cef8-4807-8ab1-3fc6f2607591] Running
	I0722 03:56:53.953182    3911 system_pods.go:89] "kube-apiserver-ha-090000-m02" [87130092-7fea-4cf8-a1b4-b2b853d60334] Running
	I0722 03:56:53.953185    3911 system_pods.go:89] "kube-apiserver-ha-090000-m03" [056a2588-da71-4189-93cd-10a92f10d8d4] Running
	I0722 03:56:53.953189    3911 system_pods.go:89] "kube-controller-manager-ha-090000" [89cfb4c4-8d84-42f2-bae3-3962aada627b] Running
	I0722 03:56:53.953192    3911 system_pods.go:89] "kube-controller-manager-ha-090000-m02" [9173940b-a550-4f67-b37c-78e456b18a13] Running
	I0722 03:56:53.953195    3911 system_pods.go:89] "kube-controller-manager-ha-090000-m03" [75846dcb-f9d9-46c6-8eaa-857c3da39b9a] Running
	I0722 03:56:53.953199    3911 system_pods.go:89] "kube-proxy-8f92w" [10da7b52-073d-40c9-87ea-8484d68147e3] Running
	I0722 03:56:53.953203    3911 system_pods.go:89] "kube-proxy-8wl7h" [210fb608-afcf-4f5c-9b75-cc949c268854] Running
	I0722 03:56:53.953206    3911 system_pods.go:89] "kube-proxy-s5kg7" [8513335b-221c-4602-9aaa-b1e85b828bb4] Running
	I0722 03:56:53.953209    3911 system_pods.go:89] "kube-proxy-xzpdq" [d7b9fd02-6f5f-47a4-afe2-1dab0d9141b7] Running
	I0722 03:56:53.953214    3911 system_pods.go:89] "kube-scheduler-ha-090000" [82031515-de24-4248-97ff-2bb892974db3] Running
	I0722 03:56:53.953219    3911 system_pods.go:89] "kube-scheduler-ha-090000-m02" [2f042e46-2b51-4b25-b94a-c22dde65c7fa] Running
	I0722 03:56:53.953222    3911 system_pods.go:89] "kube-scheduler-ha-090000-m03" [bf7cca91-4911-4f81-bde0-cbb089bd2fd2] Running
	I0722 03:56:53.953226    3911 system_pods.go:89] "kube-vip-ha-090000" [46ed0197-35a7-40cd-8480-0e66a09d4d69] Running
	I0722 03:56:53.953229    3911 system_pods.go:89] "kube-vip-ha-090000-m02" [b6025cfc-c08e-4981-b1b6-4f26ba5d5538] Running
	I0722 03:56:53.953232    3911 system_pods.go:89] "kube-vip-ha-090000-m03" [e7bc337b-5f22-4c55-86cb-1417b15343bd] Running
	I0722 03:56:53.953235    3911 system_pods.go:89] "storage-provisioner" [c1214845-bf0e-4808-9e11-faf18dd3cb3f] Running
	I0722 03:56:53.953241    3911 system_pods.go:126] duration metric: took 208.1764ms to wait for k8s-apps to be running ...
	I0722 03:56:53.953247    3911 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 03:56:53.953298    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 03:56:53.964081    3911 system_svc.go:56] duration metric: took 10.830617ms WaitForService to wait for kubelet
	I0722 03:56:53.964094    3911 kubeadm.go:582] duration metric: took 15.014585328s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 03:56:53.964109    3911 node_conditions.go:102] verifying NodePressure condition ...
	I0722 03:56:54.141596    3911 request.go:629] Waited for 177.455634ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0722 03:56:54.141627    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0722 03:56:54.141632    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:54.141645    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:54.141650    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:54.156645    3911 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0722 03:56:54.157279    3911 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 03:56:54.157291    3911 node_conditions.go:123] node cpu capacity is 2
	I0722 03:56:54.157302    3911 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 03:56:54.157305    3911 node_conditions.go:123] node cpu capacity is 2
	I0722 03:56:54.157309    3911 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 03:56:54.157315    3911 node_conditions.go:123] node cpu capacity is 2
	I0722 03:56:54.157319    3911 node_conditions.go:105] duration metric: took 193.210914ms to run NodePressure ...
	I0722 03:56:54.157327    3911 start.go:241] waiting for startup goroutines ...
	I0722 03:56:54.157344    3911 start.go:255] writing updated cluster config ...
	I0722 03:56:54.178247    3911 out.go:177] 
	I0722 03:56:54.215301    3911 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:56:54.215427    3911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/config.json ...
	I0722 03:56:54.237875    3911 out.go:177] * Starting "ha-090000-m04" worker node in "ha-090000" cluster
	I0722 03:56:54.313643    3911 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 03:56:54.313672    3911 cache.go:56] Caching tarball of preloaded images
	I0722 03:56:54.313891    3911 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0722 03:56:54.313909    3911 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 03:56:54.314031    3911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/config.json ...
	I0722 03:56:54.314743    3911 start.go:360] acquireMachinesLock for ha-090000-m04: {Name:mk52223550765842aacf96640479870ec8b5e985 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 03:56:54.314865    3911 start.go:364] duration metric: took 97.548µs to acquireMachinesLock for "ha-090000-m04"
	I0722 03:56:54.314900    3911 start.go:96] Skipping create...Using existing machine configuration
	I0722 03:56:54.314909    3911 fix.go:54] fixHost starting: m04
	I0722 03:56:54.315362    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:56:54.315392    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:56:54.324846    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52004
	I0722 03:56:54.325299    3911 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:56:54.325696    3911 main.go:141] libmachine: Using API Version  1
	I0722 03:56:54.325717    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:56:54.325994    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:56:54.326143    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:56:54.326258    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetState
	I0722 03:56:54.326348    3911 main.go:141] libmachine: (ha-090000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:56:54.326459    3911 main.go:141] libmachine: (ha-090000-m04) DBG | hyperkit pid from json: 3802
	I0722 03:56:54.327677    3911 main.go:141] libmachine: (ha-090000-m04) DBG | hyperkit pid 3802 missing from process table
	I0722 03:56:54.327712    3911 fix.go:112] recreateIfNeeded on ha-090000-m04: state=Stopped err=<nil>
	I0722 03:56:54.327724    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	W0722 03:56:54.327832    3911 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 03:56:54.347991    3911 out.go:177] * Restarting existing hyperkit VM for "ha-090000-m04" ...
	I0722 03:56:54.405790    3911 main.go:141] libmachine: (ha-090000-m04) Calling .Start
	I0722 03:56:54.406014    3911 main.go:141] libmachine: (ha-090000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:56:54.406069    3911 main.go:141] libmachine: (ha-090000-m04) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/hyperkit.pid
	I0722 03:56:54.407060    3911 main.go:141] libmachine: (ha-090000-m04) DBG | hyperkit pid 3802 missing from process table
	I0722 03:56:54.407069    3911 main.go:141] libmachine: (ha-090000-m04) DBG | pid 3802 is in state "Stopped"
	I0722 03:56:54.407087    3911 main.go:141] libmachine: (ha-090000-m04) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/hyperkit.pid...
	I0722 03:56:54.407246    3911 main.go:141] libmachine: (ha-090000-m04) DBG | Using UUID f13599ad-3762-43bd-a5c6-6cfffb7afaca
	I0722 03:56:54.437806    3911 main.go:141] libmachine: (ha-090000-m04) DBG | Generated MAC ca:7d:32:d9:5d:55
	I0722 03:56:54.437841    3911 main.go:141] libmachine: (ha-090000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-090000
	I0722 03:56:54.437986    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f13599ad-3762-43bd-a5c6-6cfffb7afaca", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a6900)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0722 03:56:54.438025    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f13599ad-3762-43bd-a5c6-6cfffb7afaca", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a6900)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0722 03:56:54.438089    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f13599ad-3762-43bd-a5c6-6cfffb7afaca", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/ha-090000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/tty,log=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/bzimage,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-090000"}
	I0722 03:56:54.438135    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f13599ad-3762-43bd-a5c6-6cfffb7afaca -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/ha-090000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/tty,log=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/bzimage,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-090000"
	I0722 03:56:54.438159    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0722 03:56:54.439735    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 DEBUG: hyperkit: Pid is 3973
	I0722 03:56:54.440437    3911 main.go:141] libmachine: (ha-090000-m04) DBG | Attempt 0
	I0722 03:56:54.440473    3911 main.go:141] libmachine: (ha-090000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:56:54.440546    3911 main.go:141] libmachine: (ha-090000-m04) DBG | hyperkit pid from json: 3973
	I0722 03:56:54.443188    3911 main.go:141] libmachine: (ha-090000-m04) DBG | Searching for ca:7d:32:d9:5d:55 in /var/db/dhcpd_leases ...
	I0722 03:56:54.443309    3911 main.go:141] libmachine: (ha-090000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0722 03:56:54.443345    3911 main.go:141] libmachine: (ha-090000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4e:65:fa:f9:26:3 ID:1,4e:65:fa:f9:26:3 Lease:0x669f8c45}
	I0722 03:56:54.443358    3911 main.go:141] libmachine: (ha-090000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:de:e:68:47:cf:44 ID:1,de:e:68:47:cf:44 Lease:0x669f8c1b}
	I0722 03:56:54.443395    3911 main.go:141] libmachine: (ha-090000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:b6:20:31:8d:e0:89 ID:1,b6:20:31:8d:e0:89 Lease:0x669e3a76}
	I0722 03:56:54.443440    3911 main.go:141] libmachine: (ha-090000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ca:7d:32:d9:5d:55 ID:1,ca:7d:32:d9:5d:55 Lease:0x669f8bc8}
	I0722 03:56:54.443458    3911 main.go:141] libmachine: (ha-090000-m04) DBG | Found match: ca:7d:32:d9:5d:55
	I0722 03:56:54.443482    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetConfigRaw
	I0722 03:56:54.443506    3911 main.go:141] libmachine: (ha-090000-m04) DBG | IP: 192.169.0.8
	I0722 03:56:54.444347    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetIP
	I0722 03:56:54.444653    3911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/config.json ...
	I0722 03:56:54.445364    3911 machine.go:94] provisionDockerMachine start ...
	I0722 03:56:54.445380    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:56:54.445624    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:56:54.445766    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:56:54.445925    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:56:54.446085    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:56:54.446269    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:56:54.446478    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:56:54.446750    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0722 03:56:54.446762    3911 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 03:56:54.450021    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0722 03:56:54.474479    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0722 03:56:54.475620    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0722 03:56:54.475643    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0722 03:56:54.475657    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0722 03:56:54.475667    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0722 03:56:54.866202    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0722 03:56:54.866218    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0722 03:56:54.981166    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0722 03:56:54.981182    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0722 03:56:54.981189    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0722 03:56:54.981195    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0722 03:56:54.982030    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0722 03:56:54.982040    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0722 03:57:00.347122    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:57:00 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0722 03:57:00.347199    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:57:00 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0722 03:57:00.347212    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:57:00 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0722 03:57:00.370939    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:57:00 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0722 03:57:29.507146    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 03:57:29.507164    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetMachineName
	I0722 03:57:29.507326    3911 buildroot.go:166] provisioning hostname "ha-090000-m04"
	I0722 03:57:29.507337    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetMachineName
	I0722 03:57:29.507436    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:29.507532    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:29.507631    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.507730    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.507816    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:29.507942    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:57:29.508105    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0722 03:57:29.508119    3911 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-090000-m04 && echo "ha-090000-m04" | sudo tee /etc/hostname
	I0722 03:57:29.566504    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-090000-m04
	
	I0722 03:57:29.566520    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:29.566676    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:29.566768    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.566861    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.566958    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:29.567095    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:57:29.567238    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0722 03:57:29.567250    3911 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-090000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-090000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-090000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 03:57:29.622448    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 03:57:29.622463    3911 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19313-1111/.minikube CaCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19313-1111/.minikube}
	I0722 03:57:29.622472    3911 buildroot.go:174] setting up certificates
	I0722 03:57:29.622479    3911 provision.go:84] configureAuth start
	I0722 03:57:29.622486    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetMachineName
	I0722 03:57:29.622644    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetIP
	I0722 03:57:29.622751    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:29.622856    3911 provision.go:143] copyHostCerts
	I0722 03:57:29.622886    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem
	I0722 03:57:29.622945    3911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem, removing ...
	I0722 03:57:29.622952    3911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem
	I0722 03:57:29.623163    3911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem (1078 bytes)
	I0722 03:57:29.623368    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem
	I0722 03:57:29.623410    3911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem, removing ...
	I0722 03:57:29.623415    3911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem
	I0722 03:57:29.623495    3911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem (1123 bytes)
	I0722 03:57:29.623640    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem
	I0722 03:57:29.623679    3911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem, removing ...
	I0722 03:57:29.623684    3911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem
	I0722 03:57:29.623770    3911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem (1675 bytes)
	I0722 03:57:29.623918    3911 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem org=jenkins.ha-090000-m04 san=[127.0.0.1 192.169.0.8 ha-090000-m04 localhost minikube]
	I0722 03:57:29.798481    3911 provision.go:177] copyRemoteCerts
	I0722 03:57:29.798536    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 03:57:29.798553    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:29.798720    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:29.798832    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.798934    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:29.799034    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/id_rsa Username:docker}
	I0722 03:57:29.828994    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0722 03:57:29.829071    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0722 03:57:29.849145    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0722 03:57:29.849216    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0722 03:57:29.868964    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0722 03:57:29.869035    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 03:57:29.889770    3911 provision.go:87] duration metric: took 267.289907ms to configureAuth
	I0722 03:57:29.889784    3911 buildroot.go:189] setting minikube options for container-runtime
	I0722 03:57:29.889952    3911 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:57:29.889967    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:57:29.890101    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:29.890199    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:29.890275    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.890367    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.890452    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:29.890562    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:57:29.890690    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0722 03:57:29.890698    3911 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0722 03:57:29.941114    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0722 03:57:29.941126    3911 buildroot.go:70] root file system type: tmpfs
	I0722 03:57:29.941203    3911 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0722 03:57:29.941214    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:29.941336    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:29.941424    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.941505    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.941596    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:29.941717    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:57:29.941859    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0722 03:57:29.941908    3911 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0722 03:57:29.999626    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0722 03:57:29.999643    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:29.999785    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:29.999874    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.999968    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:30.000060    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:30.000202    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:57:30.000354    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0722 03:57:30.000367    3911 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0722 03:57:31.614623    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0722 03:57:31.614645    3911 machine.go:97] duration metric: took 37.170271356s to provisionDockerMachine
	I0722 03:57:31.614654    3911 start.go:293] postStartSetup for "ha-090000-m04" (driver="hyperkit")
	I0722 03:57:31.614661    3911 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 03:57:31.614672    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:57:31.614863    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 03:57:31.614878    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:31.614977    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:31.615074    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:31.615173    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:31.615258    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/id_rsa Username:docker}
	I0722 03:57:31.646689    3911 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 03:57:31.649952    3911 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 03:57:31.649963    3911 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1111/.minikube/addons for local assets ...
	I0722 03:57:31.650063    3911 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1111/.minikube/files for local assets ...
	I0722 03:57:31.650246    3911 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem -> 16372.pem in /etc/ssl/certs
	I0722 03:57:31.650252    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem -> /etc/ssl/certs/16372.pem
	I0722 03:57:31.650455    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 03:57:31.658413    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem --> /etc/ssl/certs/16372.pem (1708 bytes)
	I0722 03:57:31.678576    3911 start.go:296] duration metric: took 63.915273ms for postStartSetup
	I0722 03:57:31.678597    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:57:31.678768    3911 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0722 03:57:31.678782    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:31.678870    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:31.678960    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:31.679037    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:31.679115    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/id_rsa Username:docker}
	I0722 03:57:31.710161    3911 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0722 03:57:31.710221    3911 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0722 03:57:31.764084    3911 fix.go:56] duration metric: took 37.450180093s for fixHost
	I0722 03:57:31.764110    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:31.764259    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:31.764351    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:31.764456    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:31.764557    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:31.764680    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:57:31.764822    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0722 03:57:31.764829    3911 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 03:57:31.816488    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721645852.004461870
	
	I0722 03:57:31.816502    3911 fix.go:216] guest clock: 1721645852.004461870
	I0722 03:57:31.816508    3911 fix.go:229] Guest: 2024-07-22 03:57:32.00446187 -0700 PDT Remote: 2024-07-22 03:57:31.764099 -0700 PDT m=+137.801419594 (delta=240.36287ms)
	I0722 03:57:31.816522    3911 fix.go:200] guest clock delta is within tolerance: 240.36287ms
	I0722 03:57:31.816527    3911 start.go:83] releasing machines lock for "ha-090000-m04", held for 37.50265184s
	I0722 03:57:31.816545    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:57:31.816680    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetIP
	I0722 03:57:31.839252    3911 out.go:177] * Found network options:
	I0722 03:57:31.860719    3911 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0722 03:57:31.881811    3911 proxy.go:119] fail to check proxy env: Error ip not in block
	W0722 03:57:31.881829    3911 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 03:57:31.881843    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:57:31.882321    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:57:31.882463    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:57:31.882549    3911 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 03:57:31.882589    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	W0722 03:57:31.882613    3911 proxy.go:119] fail to check proxy env: Error ip not in block
	W0722 03:57:31.882631    3911 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 03:57:31.882716    3911 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0722 03:57:31.882718    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:31.882733    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:31.882836    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:31.882856    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:31.882964    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:31.883010    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:31.883091    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:31.883141    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/id_rsa Username:docker}
	I0722 03:57:31.883196    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/id_rsa Username:docker}
	W0722 03:57:31.910458    3911 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 03:57:31.910515    3911 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 03:57:31.960457    3911 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 03:57:31.960475    3911 start.go:495] detecting cgroup driver to use...
	I0722 03:57:31.960567    3911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 03:57:31.976097    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0722 03:57:31.984637    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0722 03:57:31.992923    3911 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0722 03:57:31.992964    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0722 03:57:32.001492    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 03:57:32.009758    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0722 03:57:32.018152    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 03:57:32.026574    3911 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 03:57:32.034947    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0722 03:57:32.043182    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0722 03:57:32.051485    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0722 03:57:32.059820    3911 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 03:57:32.067251    3911 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 03:57:32.074803    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:57:32.169893    3911 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0722 03:57:32.188393    3911 start.go:495] detecting cgroup driver to use...
	I0722 03:57:32.188465    3911 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0722 03:57:32.206602    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 03:57:32.223241    3911 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 03:57:32.241086    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 03:57:32.252378    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 03:57:32.263494    3911 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0722 03:57:32.285713    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 03:57:32.296269    3911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 03:57:32.311089    3911 ssh_runner.go:195] Run: which cri-dockerd
	I0722 03:57:32.314143    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0722 03:57:32.321424    3911 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0722 03:57:32.335207    3911 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0722 03:57:32.429597    3911 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0722 03:57:32.542464    3911 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0722 03:57:32.542490    3911 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0722 03:57:32.557136    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:57:32.660326    3911 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 03:58:33.699453    3911 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.040752064s)
	I0722 03:58:33.699525    3911 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0722 03:58:33.734950    3911 out.go:177] 
	W0722 03:58:33.756536    3911 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 22 10:57:29 ha-090000-m04 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 10:57:29 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:29.446112727Z" level=info msg="Starting up"
	Jul 22 10:57:29 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:29.446594219Z" level=info msg="containerd not running, starting managed containerd"
	Jul 22 10:57:29 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:29.447194660Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=516
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.462050990Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.476816092Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.476858837Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.476899215Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.476909407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.477031508Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.477068105Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.477176376Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.477210709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.477222939Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.477230881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.477351816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.477553357Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.479128485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.479167134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.479271300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.479304705Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.479417021Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.479458809Z" level=info msg="metadata content store policy set" policy=shared
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481448117Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481494900Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481508142Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481517623Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481527464Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481569984Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481744950Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481852966Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481872403Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481907193Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481919076Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481928860Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481936657Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481955520Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481967273Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481975440Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481983423Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481991104Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482004822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482014286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482022158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482030329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482040470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482053851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482064290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482072410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482080983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482093264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482100888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482108346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482115856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482130159Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482146190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482154580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482161596Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482209554Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482243396Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482253257Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482261382Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482267623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482276094Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482285841Z" level=info msg="NRI interface is disabled by configuration."
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482429840Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482484213Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482510048Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482541660Z" level=info msg="containerd successfully booted in 0.021090s"
	Jul 22 10:57:30 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:30.467405362Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 22 10:57:30 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:30.479322696Z" level=info msg="Loading containers: start."
	Jul 22 10:57:30 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:30.599220957Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 22 10:57:30 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:30.665815288Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 22 10:57:31 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:31.771955379Z" level=warning msg="error locating sandbox id 023e4273edcd40723038879300e7321a9aec3901cb772dbfe3c38850836b1315: sandbox 023e4273edcd40723038879300e7321a9aec3901cb772dbfe3c38850836b1315 not found"
	Jul 22 10:57:31 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:31.772061725Z" level=info msg="Loading containers: done."
	Jul 22 10:57:31 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:31.779357823Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 22 10:57:31 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:31.779511676Z" level=info msg="Daemon has completed initialization"
	Jul 22 10:57:31 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:31.801250223Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 22 10:57:31 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:31.801353911Z" level=info msg="API listen on [::]:2376"
	Jul 22 10:57:31 ha-090000-m04 systemd[1]: Started Docker Application Container Engine.
	Jul 22 10:57:32 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:32.860896719Z" level=info msg="Processing signal 'terminated'"
	Jul 22 10:57:32 ha-090000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Jul 22 10:57:32 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:32.862255865Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 22 10:57:32 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:32.862561859Z" level=info msg="Daemon shutdown complete"
	Jul 22 10:57:32 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:32.862690583Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 22 10:57:32 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:32.862732129Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 22 10:57:33 ha-090000-m04 systemd[1]: docker.service: Deactivated successfully.
	Jul 22 10:57:33 ha-090000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Jul 22 10:57:33 ha-090000-m04 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 10:57:33 ha-090000-m04 dockerd[1100]: time="2024-07-22T10:57:33.897261523Z" level=info msg="Starting up"
	Jul 22 10:58:33 ha-090000-m04 dockerd[1100]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 22 10:58:33 ha-090000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 22 10:58:33 ha-090000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 22 10:58:33 ha-090000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0722 03:58:33.756659    3911 out.go:239] * 
	W0722 03:58:33.757887    3911 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 03:58:33.836490    3911 out.go:177] 

                                                
                                                
** /stderr **
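The stderr above shows dockerd on ha-090000-m04 failing to come back up after the restart because it could not dial /run/containerd/containerd.sock within the 60s startup deadline. A minimal diagnostic sketch, assuming the node is still running under the ha-090000 profile; the node name m04 and the exact commands are illustrative, taken from the hints in the error message itself:

	# Inspect the failed docker.service on the worker node over minikube ssh
	out/minikube-darwin-amd64 ssh -p ha-090000 -n m04 -- sudo systemctl status docker.service
	out/minikube-darwin-amd64 ssh -p ha-090000 -n m04 -- sudo journalctl -xeu docker.service --no-pager
	# dockerd failed while dialing containerd's socket, so check containerd as well
	out/minikube-darwin-amd64 ssh -p ha-090000 -n m04 -- sudo systemctl status containerd
	out/minikube-darwin-amd64 ssh -p ha-090000 -n m04 -- sudo journalctl -u containerd --no-pager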
ha_test.go:562: failed to start cluster. args "out/minikube-darwin-amd64 start -p ha-090000 --wait=true -v=7 --alsologtostderr --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-090000 -n ha-090000
helpers_test.go:244: <<< TestMultiControlPlane/serial/RestartCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/RestartCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-090000 logs -n 25: (3.239957414s)
helpers_test.go:252: TestMultiControlPlane/serial/RestartCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-090000 cp ha-090000-m03:/home/docker/cp-test.txt                                                                          | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | ha-090000-m04:/home/docker/cp-test_ha-090000-m03_ha-090000-m04.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-090000 ssh -n                                                                                                             | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | ha-090000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-090000 ssh -n ha-090000-m04 sudo cat                                                                                      | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | /home/docker/cp-test_ha-090000-m03_ha-090000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-090000 cp testdata/cp-test.txt                                                                                            | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | ha-090000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-090000 ssh -n                                                                                                             | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | ha-090000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-090000 cp ha-090000-m04:/home/docker/cp-test.txt                                                                          | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3050769313/001/cp-test_ha-090000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-090000 ssh -n                                                                                                             | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | ha-090000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-090000 cp ha-090000-m04:/home/docker/cp-test.txt                                                                          | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | ha-090000:/home/docker/cp-test_ha-090000-m04_ha-090000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-090000 ssh -n                                                                                                             | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | ha-090000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-090000 ssh -n ha-090000 sudo cat                                                                                          | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | /home/docker/cp-test_ha-090000-m04_ha-090000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-090000 cp ha-090000-m04:/home/docker/cp-test.txt                                                                          | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | ha-090000-m02:/home/docker/cp-test_ha-090000-m04_ha-090000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-090000 ssh -n                                                                                                             | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | ha-090000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-090000 ssh -n ha-090000-m02 sudo cat                                                                                      | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | /home/docker/cp-test_ha-090000-m04_ha-090000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-090000 cp ha-090000-m04:/home/docker/cp-test.txt                                                                          | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | ha-090000-m03:/home/docker/cp-test_ha-090000-m04_ha-090000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-090000 ssh -n                                                                                                             | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | ha-090000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-090000 ssh -n ha-090000-m03 sudo cat                                                                                      | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | /home/docker/cp-test_ha-090000-m04_ha-090000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-090000 node stop m02 -v=7                                                                                                 | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:49 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-090000 node start m02 -v=7                                                                                                | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:49 PDT | 22 Jul 24 03:49 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-090000 -v=7                                                                                                       | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:49 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-090000 -v=7                                                                                                            | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:49 PDT | 22 Jul 24 03:50 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-090000 --wait=true -v=7                                                                                                | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:50 PDT | 22 Jul 24 03:54 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-090000                                                                                                            | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:54 PDT |                     |
	| node    | ha-090000 node delete m03 -v=7                                                                                               | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:54 PDT | 22 Jul 24 03:54 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-090000 stop -v=7                                                                                                          | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:54 PDT | 22 Jul 24 03:55 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-090000 --wait=true                                                                                                     | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:55 PDT |                     |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 03:55:14
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 03:55:14.001165    3911 out.go:291] Setting OutFile to fd 1 ...
	I0722 03:55:14.001338    3911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:55:14.001344    3911 out.go:304] Setting ErrFile to fd 2...
	I0722 03:55:14.001348    3911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:55:14.001524    3911 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1111/.minikube/bin
	I0722 03:55:14.002913    3911 out.go:298] Setting JSON to false
	I0722 03:55:14.025317    3911 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3283,"bootTime":1721642431,"procs":427,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0722 03:55:14.025414    3911 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 03:55:14.048097    3911 out.go:177] * [ha-090000] minikube v1.33.1 on Darwin 14.5
	I0722 03:55:14.089944    3911 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 03:55:14.089999    3911 notify.go:220] Checking for updates...
	I0722 03:55:14.132553    3911 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1111/kubeconfig
	I0722 03:55:14.153953    3911 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0722 03:55:14.177091    3911 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 03:55:14.197830    3911 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1111/.minikube
	I0722 03:55:14.219112    3911 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 03:55:14.240693    3911 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:55:14.241352    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:55:14.241433    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:55:14.250957    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51948
	I0722 03:55:14.251322    3911 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:55:14.251741    3911 main.go:141] libmachine: Using API Version  1
	I0722 03:55:14.251758    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:55:14.252024    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:55:14.252166    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:14.252364    3911 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 03:55:14.252613    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:55:14.252647    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:55:14.260865    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51950
	I0722 03:55:14.261199    3911 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:55:14.261501    3911 main.go:141] libmachine: Using API Version  1
	I0722 03:55:14.261519    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:55:14.261723    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:55:14.261829    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:14.289710    3911 out.go:177] * Using the hyperkit driver based on existing profile
	I0722 03:55:14.331989    3911 start.go:297] selected driver: hyperkit
	I0722 03:55:14.332015    3911 start.go:901] validating driver "hyperkit" against &{Name:ha-090000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.30.3 ClusterName:ha-090000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false
ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirro
r: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 03:55:14.332262    3911 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 03:55:14.332464    3911 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 03:55:14.332656    3911 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19313-1111/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0722 03:55:14.342163    3911 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0722 03:55:14.345875    3911 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:55:14.345899    3911 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0722 03:55:14.348432    3911 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 03:55:14.348467    3911 cni.go:84] Creating CNI manager for ""
	I0722 03:55:14.348473    3911 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0722 03:55:14.348551    3911 start.go:340] cluster config:
	{Name:ha-090000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-090000 Namespace:default APIServerHAVIP:192.16
9.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false
kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: S
ocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 03:55:14.348667    3911 iso.go:125] acquiring lock: {Name:mk28fa3b914b659bb36b0449a0ad3ab1345dae32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 03:55:14.390735    3911 out.go:177] * Starting "ha-090000" primary control-plane node in "ha-090000" cluster
	I0722 03:55:14.412034    3911 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 03:55:14.412101    3911 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0722 03:55:14.412134    3911 cache.go:56] Caching tarball of preloaded images
	I0722 03:55:14.412332    3911 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0722 03:55:14.412374    3911 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 03:55:14.412547    3911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/config.json ...
	I0722 03:55:14.413322    3911 start.go:360] acquireMachinesLock for ha-090000: {Name:mk52223550765842aacf96640479870ec8b5e985 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 03:55:14.413444    3911 start.go:364] duration metric: took 104.878µs to acquireMachinesLock for "ha-090000"
	I0722 03:55:14.413466    3911 start.go:96] Skipping create...Using existing machine configuration
	I0722 03:55:14.413480    3911 fix.go:54] fixHost starting: 
	I0722 03:55:14.413779    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:55:14.413805    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:55:14.422850    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51952
	I0722 03:55:14.423211    3911 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:55:14.423607    3911 main.go:141] libmachine: Using API Version  1
	I0722 03:55:14.423626    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:55:14.423868    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:55:14.424010    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:14.424163    3911 main.go:141] libmachine: (ha-090000) Calling .GetState
	I0722 03:55:14.424269    3911 main.go:141] libmachine: (ha-090000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:55:14.424340    3911 main.go:141] libmachine: (ha-090000) DBG | hyperkit pid from json: 3743
	I0722 03:55:14.425373    3911 main.go:141] libmachine: (ha-090000) DBG | hyperkit pid 3743 missing from process table
	I0722 03:55:14.425407    3911 fix.go:112] recreateIfNeeded on ha-090000: state=Stopped err=<nil>
	I0722 03:55:14.425425    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	W0722 03:55:14.425550    3911 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 03:55:14.467917    3911 out.go:177] * Restarting existing hyperkit VM for "ha-090000" ...
	I0722 03:55:14.490898    3911 main.go:141] libmachine: (ha-090000) Calling .Start
	I0722 03:55:14.491161    3911 main.go:141] libmachine: (ha-090000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:55:14.491206    3911 main.go:141] libmachine: (ha-090000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/hyperkit.pid
	I0722 03:55:14.492929    3911 main.go:141] libmachine: (ha-090000) DBG | hyperkit pid 3743 missing from process table
	I0722 03:55:14.492946    3911 main.go:141] libmachine: (ha-090000) DBG | pid 3743 is in state "Stopped"
	I0722 03:55:14.492978    3911 main.go:141] libmachine: (ha-090000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/hyperkit.pid...
	I0722 03:55:14.493148    3911 main.go:141] libmachine: (ha-090000) DBG | Using UUID 865eb55d-4879-4f09-8c93-9ca2b7f6f541
	I0722 03:55:14.657956    3911 main.go:141] libmachine: (ha-090000) DBG | Generated MAC de:e:68:47:cf:44
	I0722 03:55:14.657983    3911 main.go:141] libmachine: (ha-090000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-090000
	I0722 03:55:14.658095    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"865eb55d-4879-4f09-8c93-9ca2b7f6f541", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c2780)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0722 03:55:14.658125    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"865eb55d-4879-4f09-8c93-9ca2b7f6f541", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c2780)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0722 03:55:14.658167    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "865eb55d-4879-4f09-8c93-9ca2b7f6f541", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/ha-090000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/tty,log=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/bzimage,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/initrd,earlyprintk=s
erial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-090000"}
	I0722 03:55:14.658258    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 865eb55d-4879-4f09-8c93-9ca2b7f6f541 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/ha-090000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/tty,log=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/console-ring -f kexec,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/bzimage,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset
norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-090000"
	I0722 03:55:14.658283    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0722 03:55:14.659556    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 DEBUG: hyperkit: Pid is 3926
	I0722 03:55:14.659971    3911 main.go:141] libmachine: (ha-090000) DBG | Attempt 0
	I0722 03:55:14.659983    3911 main.go:141] libmachine: (ha-090000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:55:14.660096    3911 main.go:141] libmachine: (ha-090000) DBG | hyperkit pid from json: 3926
	I0722 03:55:14.661907    3911 main.go:141] libmachine: (ha-090000) DBG | Searching for de:e:68:47:cf:44 in /var/db/dhcpd_leases ...
	I0722 03:55:14.661965    3911 main.go:141] libmachine: (ha-090000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0722 03:55:14.661986    3911 main.go:141] libmachine: (ha-090000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:b6:20:31:8d:e0:89 ID:1,b6:20:31:8d:e0:89 Lease:0x669e3a76}
	I0722 03:55:14.662001    3911 main.go:141] libmachine: (ha-090000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ca:7d:32:d9:5d:55 ID:1,ca:7d:32:d9:5d:55 Lease:0x669f8bc8}
	I0722 03:55:14.662012    3911 main.go:141] libmachine: (ha-090000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4e:65:fa:f9:26:3 ID:1,4e:65:fa:f9:26:3 Lease:0x669f8b16}
	I0722 03:55:14.662031    3911 main.go:141] libmachine: (ha-090000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:de:e:68:47:cf:44 ID:1,de:e:68:47:cf:44 Lease:0x669f8aec}
	I0722 03:55:14.662047    3911 main.go:141] libmachine: (ha-090000) DBG | Found match: de:e:68:47:cf:44
	I0722 03:55:14.662058    3911 main.go:141] libmachine: (ha-090000) DBG | IP: 192.169.0.5
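	The lease lookup above can be reproduced by hand on the macOS host; a rough equivalent (hypothetical command, not part of the test run, and the lease-file layout may differ slightly between macOS versions) is:
	
		grep -B 3 'de:e:68:47:cf:44' /var/db/dhcpd_leases
	
	which prints the name/ip_address lines of the matching hw_address entry, here resolving the VM's MAC to 192.169.0.5.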
	I0722 03:55:14.662088    3911 main.go:141] libmachine: (ha-090000) Calling .GetConfigRaw
	I0722 03:55:14.662970    3911 main.go:141] libmachine: (ha-090000) Calling .GetIP
	I0722 03:55:14.663190    3911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/config.json ...
	I0722 03:55:14.663619    3911 machine.go:94] provisionDockerMachine start ...
	I0722 03:55:14.663631    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:14.663775    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:14.663892    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:14.663995    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:14.664107    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:14.664217    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:14.664369    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:55:14.664624    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0722 03:55:14.664637    3911 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 03:55:14.668018    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0722 03:55:14.726271    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0722 03:55:14.726986    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0722 03:55:14.727016    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0722 03:55:14.727030    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0722 03:55:14.727041    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0722 03:55:15.102308    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:15 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0722 03:55:15.102323    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:15 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0722 03:55:15.217057    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0722 03:55:15.217079    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0722 03:55:15.217092    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0722 03:55:15.217103    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0722 03:55:15.217955    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:15 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0722 03:55:15.217966    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:15 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0722 03:55:20.486836    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:20 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0722 03:55:20.486863    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:20 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0722 03:55:20.486878    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:20 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0722 03:55:20.511003    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:20 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0722 03:55:49.725974    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 03:55:49.725988    3911 main.go:141] libmachine: (ha-090000) Calling .GetMachineName
	I0722 03:55:49.726125    3911 buildroot.go:166] provisioning hostname "ha-090000"
	I0722 03:55:49.726138    3911 main.go:141] libmachine: (ha-090000) Calling .GetMachineName
	I0722 03:55:49.726243    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:49.726335    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:49.726420    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:49.726506    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:49.726616    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:49.726741    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:55:49.726890    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0722 03:55:49.726899    3911 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-090000 && echo "ha-090000" | sudo tee /etc/hostname
	I0722 03:55:49.789306    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-090000
	
	I0722 03:55:49.789328    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:49.789466    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:49.789581    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:49.789678    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:49.789776    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:49.789915    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:55:49.790061    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0722 03:55:49.790072    3911 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-090000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-090000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-090000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 03:55:49.849551    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 03:55:49.849576    3911 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19313-1111/.minikube CaCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19313-1111/.minikube}
	I0722 03:55:49.849589    3911 buildroot.go:174] setting up certificates
	I0722 03:55:49.849598    3911 provision.go:84] configureAuth start
	I0722 03:55:49.849606    3911 main.go:141] libmachine: (ha-090000) Calling .GetMachineName
	I0722 03:55:49.849736    3911 main.go:141] libmachine: (ha-090000) Calling .GetIP
	I0722 03:55:49.849829    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:49.849906    3911 provision.go:143] copyHostCerts
	I0722 03:55:49.849941    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem
	I0722 03:55:49.850010    3911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem, removing ...
	I0722 03:55:49.850019    3911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem
	I0722 03:55:49.850190    3911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem (1078 bytes)
	I0722 03:55:49.850418    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem
	I0722 03:55:49.850458    3911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem, removing ...
	I0722 03:55:49.850463    3911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem
	I0722 03:55:49.850553    3911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem (1123 bytes)
	I0722 03:55:49.850707    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem
	I0722 03:55:49.850746    3911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem, removing ...
	I0722 03:55:49.850751    3911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem
	I0722 03:55:49.850838    3911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem (1675 bytes)
	I0722 03:55:49.850994    3911 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem org=jenkins.ha-090000 san=[127.0.0.1 192.169.0.5 ha-090000 localhost minikube]
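	minikube generates that server certificate in Go (crypto/x509); purely as an illustration of the same operation, an openssl sketch with placeholder file names for the CA and key material referenced above would look roughly like:
	
		openssl req -new -key server-key.pem -subj "/O=jenkins.ha-090000" -out server.csr
		openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial \
		  -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.169.0.5,DNS:ha-090000,DNS:localhost,DNS:minikube') \
		  -days 365 -out server.pem
	
	The SAN list mirrors the san=[...] values in the log line; the validity period here is arbitrary.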
	I0722 03:55:49.954745    3911 provision.go:177] copyRemoteCerts
	I0722 03:55:49.954797    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 03:55:49.954814    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:49.954945    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:49.955036    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:49.955138    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:49.955226    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/id_rsa Username:docker}
	I0722 03:55:49.988017    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0722 03:55:49.988090    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0722 03:55:50.006955    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0722 03:55:50.007018    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0722 03:55:50.026488    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0722 03:55:50.026558    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 03:55:50.045921    3911 provision.go:87] duration metric: took 196.3146ms to configureAuth
	I0722 03:55:50.045933    3911 buildroot.go:189] setting minikube options for container-runtime
	I0722 03:55:50.046087    3911 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:55:50.046101    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:50.046225    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:50.046308    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:50.046401    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:50.046493    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:50.046569    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:50.046685    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:55:50.046803    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0722 03:55:50.046811    3911 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0722 03:55:50.100376    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0722 03:55:50.100387    3911 buildroot.go:70] root file system type: tmpfs
	I0722 03:55:50.100457    3911 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0722 03:55:50.100468    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:50.100595    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:50.100692    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:50.100789    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:50.100888    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:50.101021    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:55:50.101173    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0722 03:55:50.101220    3911 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0722 03:55:50.162706    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0722 03:55:50.162761    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:50.162891    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:50.162997    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:50.163099    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:50.163182    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:50.163329    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:55:50.163465    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0722 03:55:50.163477    3911 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0722 03:55:51.839255    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0722 03:55:51.839270    3911 machine.go:97] duration metric: took 37.176641879s to provisionDockerMachine
	I0722 03:55:51.839283    3911 start.go:293] postStartSetup for "ha-090000" (driver="hyperkit")
	I0722 03:55:51.839300    3911 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 03:55:51.839314    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:51.839490    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 03:55:51.839510    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:51.839611    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:51.839703    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:51.839796    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:51.839928    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/id_rsa Username:docker}
	I0722 03:55:51.873857    3911 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 03:55:51.877062    3911 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 03:55:51.877075    3911 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1111/.minikube/addons for local assets ...
	I0722 03:55:51.877182    3911 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1111/.minikube/files for local assets ...
	I0722 03:55:51.877378    3911 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem -> 16372.pem in /etc/ssl/certs
	I0722 03:55:51.877384    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem -> /etc/ssl/certs/16372.pem
	I0722 03:55:51.877594    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 03:55:51.885692    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem --> /etc/ssl/certs/16372.pem (1708 bytes)
	I0722 03:55:51.904673    3911 start.go:296] duration metric: took 65.382263ms for postStartSetup
	I0722 03:55:51.904692    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:51.904859    3911 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0722 03:55:51.904872    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:51.904961    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:51.905042    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:51.905118    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:51.905210    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/id_rsa Username:docker}
	I0722 03:55:51.938400    3911 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0722 03:55:51.938461    3911 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0722 03:55:51.992039    3911 fix.go:56] duration metric: took 37.579572847s for fixHost
	I0722 03:55:51.992063    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:51.992208    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:51.992304    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:51.992398    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:51.992482    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:51.992602    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:55:51.992763    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0722 03:55:51.992770    3911 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 03:55:52.046381    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721645751.936433832
	
	I0722 03:55:52.046393    3911 fix.go:216] guest clock: 1721645751.936433832
	I0722 03:55:52.046398    3911 fix.go:229] Guest: 2024-07-22 03:55:51.936433832 -0700 PDT Remote: 2024-07-22 03:55:51.992052 -0700 PDT m=+38.026686282 (delta=-55.618168ms)
	I0722 03:55:52.046416    3911 fix.go:200] guest clock delta is within tolerance: -55.618168ms
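	The reported delta is simply guest minus host: 1721645751.936433832 - 1721645751.992052 = -0.055618168 s, i.e. the -55.618168ms shown above, which is well within minikube's clock-skew tolerance.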
	I0722 03:55:52.046421    3911 start.go:83] releasing machines lock for "ha-090000", held for 37.633981911s
	I0722 03:55:52.046442    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:52.046575    3911 main.go:141] libmachine: (ha-090000) Calling .GetIP
	I0722 03:55:52.046677    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:52.046990    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:52.047122    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:52.047226    3911 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 03:55:52.047248    3911 ssh_runner.go:195] Run: cat /version.json
	I0722 03:55:52.047259    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:52.047259    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:52.047380    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:52.047396    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:52.047483    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:52.047511    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:52.047561    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:52.047626    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:52.047654    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/id_rsa Username:docker}
	I0722 03:55:52.047720    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/id_rsa Username:docker}
	I0722 03:55:52.075429    3911 ssh_runner.go:195] Run: systemctl --version
	I0722 03:55:52.079894    3911 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 03:55:52.124828    3911 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 03:55:52.124898    3911 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 03:55:52.137859    3911 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 03:55:52.137870    3911 start.go:495] detecting cgroup driver to use...
	I0722 03:55:52.137970    3911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 03:55:52.155379    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0722 03:55:52.164198    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0722 03:55:52.173115    3911 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0722 03:55:52.173156    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0722 03:55:52.182074    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 03:55:52.190972    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0722 03:55:52.199765    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 03:55:52.208507    3911 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 03:55:52.217591    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0722 03:55:52.226424    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0722 03:55:52.235243    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
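	Taken together, the sed edits above are expected to leave /etc/containerd/config.toml with an excerpt like the following (reconstructed for illustration; exact surrounding keys vary by containerd version):
	
		[plugins."io.containerd.grpc.v1.cri"]
		  enable_unprivileged_ports = true
		  # ... other cri settings unchanged ...
		  [plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
		    SystemdCgroup = false
	
	i.e. the runtime is switched to the runc v2 shim with the cgroupfs driver, matching the "configuring containerd to use cgroupfs" message above.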
	I0722 03:55:52.244124    3911 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 03:55:52.252099    3911 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 03:55:52.259973    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:55:52.354629    3911 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0722 03:55:52.373701    3911 start.go:495] detecting cgroup driver to use...
	I0722 03:55:52.373781    3911 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0722 03:55:52.386226    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 03:55:52.407006    3911 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 03:55:52.422442    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 03:55:52.433467    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 03:55:52.445302    3911 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0722 03:55:52.465665    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 03:55:52.477795    3911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 03:55:52.493683    3911 ssh_runner.go:195] Run: which cri-dockerd
	I0722 03:55:52.496631    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0722 03:55:52.503860    3911 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0722 03:55:52.517344    3911 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0722 03:55:52.615407    3911 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0722 03:55:52.719878    3911 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0722 03:55:52.719955    3911 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
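	The 130-byte daemon.json written above selects the cgroupfs driver for dockerd; a representative payload (reconstructed from minikube's defaults, not captured verbatim in this log) is:
	
		{
		  "exec-opts": ["native.cgroupdriver=cgroupfs"],
		  "log-driver": "json-file",
		  "log-opts": { "max-size": "100m" },
		  "storage-driver": "overlay2"
		}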
	I0722 03:55:52.735170    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:55:52.840992    3911 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 03:55:55.172776    3911 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.331829238s)
	I0722 03:55:55.172846    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0722 03:55:55.183162    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 03:55:55.193307    3911 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0722 03:55:55.284550    3911 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0722 03:55:55.395161    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:55:55.503613    3911 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0722 03:55:55.517310    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 03:55:55.528594    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:55:55.620227    3911 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0722 03:55:55.685036    3911 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0722 03:55:55.685111    3911 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0722 03:55:55.689532    3911 start.go:563] Will wait 60s for crictl version
	I0722 03:55:55.689580    3911 ssh_runner.go:195] Run: which crictl
	I0722 03:55:55.692688    3911 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 03:55:55.719714    3911 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0722 03:55:55.719788    3911 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 03:55:55.737225    3911 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 03:55:55.780302    3911 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0722 03:55:55.780349    3911 main.go:141] libmachine: (ha-090000) Calling .GetIP
	I0722 03:55:55.780734    3911 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0722 03:55:55.785388    3911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 03:55:55.796137    3911 kubeadm.go:883] updating cluster {Name:ha-090000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.
3 ClusterName:ha-090000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:f
alse inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 03:55:55.796229    3911 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 03:55:55.796288    3911 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0722 03:55:55.808589    3911 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240719-e7903573
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0722 03:55:55.808605    3911 docker.go:615] Images already preloaded, skipping extraction
	I0722 03:55:55.808686    3911 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0722 03:55:55.823528    3911 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240719-e7903573
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0722 03:55:55.823552    3911 cache_images.go:84] Images are preloaded, skipping loading
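The preload check above shells out to `docker images --format {{.Repository}}:{{.Tag}}` and, because every expected image is already present in the VM, skips extracting the preloaded tarball. A rough Go sketch of that presence check follows; the required list is a subset copied from the log output above, whereas the real check is driven by the preload manifest.

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// requiredImages mirrors (part of) the preloaded images listed in the log above.
var requiredImages = []string{
	"registry.k8s.io/kube-apiserver:v1.30.3",
	"registry.k8s.io/kube-controller-manager:v1.30.3",
	"registry.k8s.io/kube-scheduler:v1.30.3",
	"registry.k8s.io/kube-proxy:v1.30.3",
	"registry.k8s.io/etcd:3.5.12-0",
	"registry.k8s.io/coredns/coredns:v1.11.1",
	"registry.k8s.io/pause:3.9",
	"gcr.io/k8s-minikube/storage-provisioner:v5",
}

func main() {
	// Same listing command the ssh_runner executes on the node.
	out, err := exec.Command("docker", "images", "--format", "{{.Repository}}:{{.Tag}}").Output()
	if err != nil {
		fmt.Println("docker images failed:", err)
		return
	}
	have := map[string]bool{}
	for _, line := range strings.Split(strings.TrimSpace(string(out)), "\n") {
		have[line] = true
	}
	missing := 0
	for _, img := range requiredImages {
		if !have[img] {
			fmt.Println("missing:", img)
			missing++
		}
	}
	if missing == 0 {
		fmt.Println("images are preloaded, skipping loading")
	}
}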
	I0722 03:55:55.823561    3911 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.30.3 docker true true} ...
	I0722 03:55:55.823650    3911 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-090000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-090000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 03:55:55.823715    3911 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0722 03:55:55.843770    3911 cni.go:84] Creating CNI manager for ""
	I0722 03:55:55.843782    3911 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0722 03:55:55.843795    3911 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 03:55:55.843811    3911 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-090000 NodeName:ha-090000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 03:55:55.843918    3911 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-090000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
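The rendered kubeadm config above is a single file holding four YAML documents separated by `---` (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration); later in the log it is copied to /var/tmp/minikube/kubeadm.yaml.new on the node. A minimal sketch of splitting such a file into its documents and reporting each kind, using gopkg.in/yaml.v3 and the file path shown later in the log:

package main

import (
	"fmt"
	"os"
	"strings"

	"gopkg.in/yaml.v3"
)

// typeMeta is just enough structure to identify each document.
type typeMeta struct {
	APIVersion string `yaml:"apiVersion"`
	Kind       string `yaml:"kind"`
}

func main() {
	data, err := os.ReadFile("/var/tmp/minikube/kubeadm.yaml.new")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// kubeadm accepts several documents in one file, separated by "---".
	for _, doc := range strings.Split(string(data), "\n---\n") {
		var tm typeMeta
		if err := yaml.Unmarshal([]byte(doc), &tm); err != nil {
			fmt.Fprintln(os.Stderr, "skipping document:", err)
			continue
		}
		fmt.Printf("%s / %s\n", tm.APIVersion, tm.Kind)
	}
}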
	I0722 03:55:55.843949    3911 kube-vip.go:115] generating kube-vip config ...
	I0722 03:55:55.843997    3911 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0722 03:55:55.858984    3911 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0722 03:55:55.859051    3911 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
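The manifest above runs kube-vip as a static pod on each control-plane node; with cp_enable and vip_leaderelection set, the elected leader announces the virtual IP 192.169.0.254 via ARP and load-balances API traffic on port 8443. A quick sanity check from outside the cluster is simply to open a TCP connection to the VIP, as in this small sketch, which assumes the test host can reach the 192.169.0.0/24 hyperkit network:

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The current kube-vip leader should be answering on the HA virtual IP.
	addr := "192.169.0.254:8443"
	conn, err := net.DialTimeout("tcp", addr, 3*time.Second)
	if err != nil {
		fmt.Println("VIP not reachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("VIP reachable at", conn.RemoteAddr())
}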
	I0722 03:55:55.859099    3911 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 03:55:55.871541    3911 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 03:55:55.871605    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0722 03:55:55.879901    3911 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0722 03:55:55.893317    3911 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 03:55:55.906860    3911 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0722 03:55:55.920583    3911 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0722 03:55:55.934115    3911 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0722 03:55:55.937202    3911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 03:55:55.947512    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:55:56.043601    3911 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 03:55:56.058460    3911 certs.go:68] Setting up /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000 for IP: 192.169.0.5
	I0722 03:55:56.058473    3911 certs.go:194] generating shared ca certs ...
	I0722 03:55:56.058482    3911 certs.go:226] acquiring lock for ca certs: {Name:mk31b6ba3ba4e51acc59db740baf7c8ba8dd988b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:55:56.058655    3911 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.key
	I0722 03:55:56.058735    3911 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.key
	I0722 03:55:56.058744    3911 certs.go:256] generating profile certs ...
	I0722 03:55:56.058828    3911 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/client.key
	I0722 03:55:56.058850    3911 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key.9d35a603
	I0722 03:55:56.058866    3911 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.crt.9d35a603 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0722 03:55:56.176369    3911 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.crt.9d35a603 ...
	I0722 03:55:56.176387    3911 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.crt.9d35a603: {Name:mk56ec66ac2a3d80a126aae24a23c208f41c56a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:55:56.176780    3911 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key.9d35a603 ...
	I0722 03:55:56.176790    3911 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key.9d35a603: {Name:mk0da3ff1ed021cd0c62e370f79895aeed00bfd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:55:56.177042    3911 certs.go:381] copying /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.crt.9d35a603 -> /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.crt
	I0722 03:55:56.177289    3911 certs.go:385] copying /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key.9d35a603 -> /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key
	I0722 03:55:56.177558    3911 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.key
	I0722 03:55:56.177573    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0722 03:55:56.177599    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0722 03:55:56.177621    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0722 03:55:56.177643    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0722 03:55:56.177663    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0722 03:55:56.177684    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0722 03:55:56.177705    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0722 03:55:56.177727    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0722 03:55:56.177832    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/1637.pem (1338 bytes)
	W0722 03:55:56.177883    3911 certs.go:480] ignoring /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/1637_empty.pem, impossibly tiny 0 bytes
	I0722 03:55:56.177892    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 03:55:56.177935    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem (1078 bytes)
	I0722 03:55:56.177980    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem (1123 bytes)
	I0722 03:55:56.178009    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem (1675 bytes)
	I0722 03:55:56.178085    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem (1708 bytes)
	I0722 03:55:56.178123    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/1637.pem -> /usr/share/ca-certificates/1637.pem
	I0722 03:55:56.178148    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem -> /usr/share/ca-certificates/16372.pem
	I0722 03:55:56.178168    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0722 03:55:56.178610    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 03:55:56.201771    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 03:55:56.234700    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 03:55:56.277028    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0722 03:55:56.303799    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 03:55:56.355626    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 03:55:56.423367    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 03:55:56.460516    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 03:55:56.495805    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/1637.pem --> /usr/share/ca-certificates/1637.pem (1338 bytes)
	I0722 03:55:56.523902    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem --> /usr/share/ca-certificates/16372.pem (1708 bytes)
	I0722 03:55:56.561999    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 03:55:56.592542    3911 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 03:55:56.609376    3911 ssh_runner.go:195] Run: openssl version
	I0722 03:55:56.613622    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1637.pem && ln -fs /usr/share/ca-certificates/1637.pem /etc/ssl/certs/1637.pem"
	I0722 03:55:56.622123    3911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1637.pem
	I0722 03:55:56.625637    3911 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:38 /usr/share/ca-certificates/1637.pem
	I0722 03:55:56.625671    3911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1637.pem
	I0722 03:55:56.629816    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1637.pem /etc/ssl/certs/51391683.0"
	I0722 03:55:56.638362    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16372.pem && ln -fs /usr/share/ca-certificates/16372.pem /etc/ssl/certs/16372.pem"
	I0722 03:55:56.646609    3911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16372.pem
	I0722 03:55:56.650063    3911 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:38 /usr/share/ca-certificates/16372.pem
	I0722 03:55:56.650097    3911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16372.pem
	I0722 03:55:56.654257    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16372.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 03:55:56.662670    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 03:55:56.671261    3911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 03:55:56.674720    3911 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 03:55:56.674754    3911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 03:55:56.678972    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
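The `openssl x509 -hash` and `ln -fs` pairs above install each CA into the node's trust store the way c_rehash would: OpenSSL looks up CAs in /etc/ssl/certs via symlinks named after the certificate's subject hash (b5213941.0 for minikubeCA.pem in this run). A rough Go sketch of computing that hash and creating the link; it shells out to the same openssl binary and would need to run as root on the node.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash symlinks certPath into certsDir under <subject-hash>.0,
// the layout OpenSSL expects for a hashed CA directory.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := certsDir + "/" + hash + ".0"
	_ = os.Remove(link) // replace any stale link, like `ln -fs`
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}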
	I0722 03:55:56.687498    3911 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 03:55:56.691047    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 03:55:56.695322    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 03:55:56.699702    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 03:55:56.704065    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 03:55:56.708401    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 03:55:56.712852    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
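Each `openssl x509 -checkend 86400` call above asks whether the certificate will still be valid 24 hours from now; any cert that would expire inside that window is regenerated before the control plane is restarted. An equivalent check can be done in Go with crypto/x509, as in this sketch (the path is one of the certificates probed above):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside d,
// which is the condition `openssl x509 -checkend <seconds>` tests.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/etcd/server.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}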
	I0722 03:55:56.717112    3911 kubeadm.go:392] StartCluster: {Name:ha-090000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:ha-090000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 03:55:56.717233    3911 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0722 03:55:56.730051    3911 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 03:55:56.737806    3911 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 03:55:56.737821    3911 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 03:55:56.737861    3911 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 03:55:56.745356    3911 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 03:55:56.745651    3911 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-090000" does not appear in /Users/jenkins/minikube-integration/19313-1111/kubeconfig
	I0722 03:55:56.745730    3911 kubeconfig.go:62] /Users/jenkins/minikube-integration/19313-1111/kubeconfig needs updating (will repair): [kubeconfig missing "ha-090000" cluster setting kubeconfig missing "ha-090000" context setting]
	I0722 03:55:56.745922    3911 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1111/kubeconfig: {Name:mkf2b240918cd66dabf425a67d7df0a0c9aa8c39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:55:56.746572    3911 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19313-1111/kubeconfig
	I0722 03:55:56.746765    3911 kapi.go:59] client config for ha-090000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/client.key", CAFile:"/Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xc727ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
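At this point the profile's kubeconfig is missing both the "ha-090000" cluster and context entries, so the file is repaired before the client config above is built. A hedged sketch of adding such entries with client-go's clientcmd package; the server address, CA path, and kubeconfig path are taken from the log, while the context and user names are illustrative.

package main

import (
	"fmt"
	"os"

	"k8s.io/client-go/tools/clientcmd"
	api "k8s.io/client-go/tools/clientcmd/api"
)

func main() {
	// KUBECONFIG path used by this test run, per the log.
	path := "/Users/jenkins/minikube-integration/19313-1111/kubeconfig"
	cfg, err := clientcmd.LoadFromFile(path)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	// Register the cluster and a context pointing at it, then persist.
	cfg.Clusters["ha-090000"] = &api.Cluster{
		Server:               "https://192.169.0.5:8443",
		CertificateAuthority: "/Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt",
	}
	cfg.Contexts["ha-090000"] = &api.Context{
		Cluster:  "ha-090000",
		AuthInfo: "ha-090000", // assumes a matching user entry already exists
	}
	cfg.CurrentContext = "ha-090000"
	if err := clientcmd.WriteToFile(*cfg, path); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}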
	I0722 03:55:56.747076    3911 cert_rotation.go:137] Starting client certificate rotation controller
	I0722 03:55:56.747254    3911 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 03:55:56.754607    3911 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0722 03:55:56.754620    3911 kubeadm.go:597] duration metric: took 16.795414ms to restartPrimaryControlPlane
	I0722 03:55:56.754625    3911 kubeadm.go:394] duration metric: took 37.520322ms to StartCluster
	I0722 03:55:56.754634    3911 settings.go:142] acquiring lock: {Name:mk61cf5b2a74edb35dda57ecbe8abc2ea6c58c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:55:56.754711    3911 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19313-1111/kubeconfig
	I0722 03:55:56.755134    3911 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1111/kubeconfig: {Name:mkf2b240918cd66dabf425a67d7df0a0c9aa8c39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:55:56.755360    3911 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 03:55:56.755373    3911 start.go:241] waiting for startup goroutines ...
	I0722 03:55:56.755387    3911 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 03:55:56.755497    3911 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:55:56.799163    3911 out.go:177] * Enabled addons: 
	I0722 03:55:56.820182    3911 addons.go:510] duration metric: took 64.792244ms for enable addons: enabled=[]
	I0722 03:55:56.820230    3911 start.go:246] waiting for cluster config update ...
	I0722 03:55:56.820244    3911 start.go:255] writing updated cluster config ...
	I0722 03:55:56.842189    3911 out.go:177] 
	I0722 03:55:56.863789    3911 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:55:56.863918    3911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/config.json ...
	I0722 03:55:56.886431    3911 out.go:177] * Starting "ha-090000-m02" control-plane node in "ha-090000" cluster
	I0722 03:55:56.928353    3911 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 03:55:56.928403    3911 cache.go:56] Caching tarball of preloaded images
	I0722 03:55:56.928581    3911 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0722 03:55:56.928604    3911 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 03:55:56.928730    3911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/config.json ...
	I0722 03:55:56.929636    3911 start.go:360] acquireMachinesLock for ha-090000-m02: {Name:mk52223550765842aacf96640479870ec8b5e985 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 03:55:56.929748    3911 start.go:364] duration metric: took 80.846µs to acquireMachinesLock for "ha-090000-m02"
	I0722 03:55:56.929773    3911 start.go:96] Skipping create...Using existing machine configuration
	I0722 03:55:56.929782    3911 fix.go:54] fixHost starting: m02
	I0722 03:55:56.930190    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:55:56.930213    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:55:56.939208    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51974
	I0722 03:55:56.939548    3911 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:55:56.939878    3911 main.go:141] libmachine: Using API Version  1
	I0722 03:55:56.939889    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:55:56.940129    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:55:56.940269    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 03:55:56.940364    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetState
	I0722 03:55:56.940445    3911 main.go:141] libmachine: (ha-090000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:55:56.940553    3911 main.go:141] libmachine: (ha-090000-m02) DBG | hyperkit pid from json: 3753
	I0722 03:55:56.941410    3911 main.go:141] libmachine: (ha-090000-m02) DBG | hyperkit pid 3753 missing from process table
	I0722 03:55:56.941430    3911 fix.go:112] recreateIfNeeded on ha-090000-m02: state=Stopped err=<nil>
	I0722 03:55:56.941439    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	W0722 03:55:56.941520    3911 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 03:55:56.963252    3911 out.go:177] * Restarting existing hyperkit VM for "ha-090000-m02" ...
	I0722 03:55:56.984572    3911 main.go:141] libmachine: (ha-090000-m02) Calling .Start
	I0722 03:55:56.984884    3911 main.go:141] libmachine: (ha-090000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:55:56.984972    3911 main.go:141] libmachine: (ha-090000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/hyperkit.pid
	I0722 03:55:56.986700    3911 main.go:141] libmachine: (ha-090000-m02) DBG | hyperkit pid 3753 missing from process table
	I0722 03:55:56.986715    3911 main.go:141] libmachine: (ha-090000-m02) DBG | pid 3753 is in state "Stopped"
	I0722 03:55:56.986731    3911 main.go:141] libmachine: (ha-090000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/hyperkit.pid...
	I0722 03:55:56.987014    3911 main.go:141] libmachine: (ha-090000-m02) DBG | Using UUID a238bb05-e07d-4298-98be-9d336c163b01
	I0722 03:55:57.014110    3911 main.go:141] libmachine: (ha-090000-m02) DBG | Generated MAC 4e:65:fa:f9:26:3
	I0722 03:55:57.014143    3911 main.go:141] libmachine: (ha-090000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-090000
	I0722 03:55:57.014261    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a238bb05-e07d-4298-98be-9d336c163b01", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037b350)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0722 03:55:57.014289    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a238bb05-e07d-4298-98be-9d336c163b01", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037b350)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0722 03:55:57.014330    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "a238bb05-e07d-4298-98be-9d336c163b01", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/ha-090000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/tty,log=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/bzimage,/Users/jenkins/minikube-integration/19313-1111/.minikube/machine
s/ha-090000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-090000"}
	I0722 03:55:57.014365    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U a238bb05-e07d-4298-98be-9d336c163b01 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/ha-090000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/tty,log=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/bzimage,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/initrd,earlyprintk=serial loglevel=3 console=t
tyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-090000"
	I0722 03:55:57.014400    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0722 03:55:57.015680    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 DEBUG: hyperkit: Pid is 3958
	I0722 03:55:57.016180    3911 main.go:141] libmachine: (ha-090000-m02) DBG | Attempt 0
	I0722 03:55:57.016197    3911 main.go:141] libmachine: (ha-090000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:55:57.016259    3911 main.go:141] libmachine: (ha-090000-m02) DBG | hyperkit pid from json: 3958
	I0722 03:55:57.018025    3911 main.go:141] libmachine: (ha-090000-m02) DBG | Searching for 4e:65:fa:f9:26:3 in /var/db/dhcpd_leases ...
	I0722 03:55:57.018041    3911 main.go:141] libmachine: (ha-090000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0722 03:55:57.018086    3911 main.go:141] libmachine: (ha-090000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:de:e:68:47:cf:44 ID:1,de:e:68:47:cf:44 Lease:0x669f8c1b}
	I0722 03:55:57.018095    3911 main.go:141] libmachine: (ha-090000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:b6:20:31:8d:e0:89 ID:1,b6:20:31:8d:e0:89 Lease:0x669e3a76}
	I0722 03:55:57.018102    3911 main.go:141] libmachine: (ha-090000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ca:7d:32:d9:5d:55 ID:1,ca:7d:32:d9:5d:55 Lease:0x669f8bc8}
	I0722 03:55:57.018112    3911 main.go:141] libmachine: (ha-090000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4e:65:fa:f9:26:3 ID:1,4e:65:fa:f9:26:3 Lease:0x669f8b16}
	I0722 03:55:57.018118    3911 main.go:141] libmachine: (ha-090000-m02) DBG | Found match: 4e:65:fa:f9:26:3
	I0722 03:55:57.018122    3911 main.go:141] libmachine: (ha-090000-m02) DBG | IP: 192.169.0.6
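Because there is no guest agent, the hyperkit driver discovers the restarted VM's address by matching the MAC it generated (4e:65:fa:f9:26:3) against entries in the macOS DHCP lease database /var/db/dhcpd_leases, as the DBG lines above show. A simplified sketch of that lookup; the field names follow the bootpd lease entries echoed in the log, and the driver's real parser is more defensive.

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// ipForMAC scans /var/db/dhcpd_leases-style blocks and returns the ip_address
// from the first block whose hw_address ends with the given MAC.
func ipForMAC(leasesPath, mac string) (string, error) {
	f, err := os.Open(leasesPath)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=") && strings.HasSuffix(line, mac):
			return ip, nil // hw_address follows ip_address within a lease block
		case line == "}":
			ip = "" // end of a lease block, reset
		}
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := ipForMAC("/var/db/dhcpd_leases", "4e:65:fa:f9:26:3")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("VM IP:", ip)
}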
	I0722 03:55:57.018178    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetConfigRaw
	I0722 03:55:57.018834    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetIP
	I0722 03:55:57.019009    3911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/config.json ...
	I0722 03:55:57.019499    3911 machine.go:94] provisionDockerMachine start ...
	I0722 03:55:57.019509    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 03:55:57.019651    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:55:57.019770    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:55:57.019892    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:55:57.020010    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:55:57.020098    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:55:57.020264    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:55:57.020422    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0722 03:55:57.020435    3911 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 03:55:57.023607    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0722 03:55:57.031862    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0722 03:55:57.032835    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0722 03:55:57.032848    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0722 03:55:57.032855    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0722 03:55:57.032861    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0722 03:55:57.411442    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0722 03:55:57.411461    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0722 03:55:57.526363    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0722 03:55:57.526382    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0722 03:55:57.526390    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0722 03:55:57.526396    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0722 03:55:57.527265    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0722 03:55:57.527278    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0722 03:56:02.785857    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:56:02 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0722 03:56:02.785940    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:56:02 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0722 03:56:02.785949    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:56:02 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0722 03:56:02.812798    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:56:02 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0722 03:56:32.075580    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 03:56:32.075594    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetMachineName
	I0722 03:56:32.075720    3911 buildroot.go:166] provisioning hostname "ha-090000-m02"
	I0722 03:56:32.075731    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetMachineName
	I0722 03:56:32.075826    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:32.075933    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:32.076015    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.076119    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.076212    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:32.076341    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:56:32.076492    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0722 03:56:32.076502    3911 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-090000-m02 && echo "ha-090000-m02" | sudo tee /etc/hostname
	I0722 03:56:32.136897    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-090000-m02
	
	I0722 03:56:32.136912    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:32.137046    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:32.137157    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.137250    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.137341    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:32.137474    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:56:32.137607    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0722 03:56:32.137618    3911 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-090000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-090000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-090000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 03:56:32.192449    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 03:56:32.192463    3911 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19313-1111/.minikube CaCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19313-1111/.minikube}
	I0722 03:56:32.192472    3911 buildroot.go:174] setting up certificates
	I0722 03:56:32.192482    3911 provision.go:84] configureAuth start
	I0722 03:56:32.192492    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetMachineName
	I0722 03:56:32.192621    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetIP
	I0722 03:56:32.192721    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:32.192798    3911 provision.go:143] copyHostCerts
	I0722 03:56:32.192826    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem
	I0722 03:56:32.192874    3911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem, removing ...
	I0722 03:56:32.192879    3911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem
	I0722 03:56:32.193015    3911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem (1078 bytes)
	I0722 03:56:32.193230    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem
	I0722 03:56:32.193264    3911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem, removing ...
	I0722 03:56:32.193269    3911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem
	I0722 03:56:32.193346    3911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem (1123 bytes)
	I0722 03:56:32.193513    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem
	I0722 03:56:32.193541    3911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem, removing ...
	I0722 03:56:32.193546    3911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem
	I0722 03:56:32.193618    3911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem (1675 bytes)
	I0722 03:56:32.193767    3911 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem org=jenkins.ha-090000-m02 san=[127.0.0.1 192.169.0.6 ha-090000-m02 localhost minikube]
	I0722 03:56:32.314909    3911 provision.go:177] copyRemoteCerts
	I0722 03:56:32.314954    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 03:56:32.314968    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:32.315107    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:32.315208    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.315309    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:32.315384    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/id_rsa Username:docker}
	I0722 03:56:32.347809    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0722 03:56:32.347885    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 03:56:32.366931    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0722 03:56:32.366988    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0722 03:56:32.386030    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0722 03:56:32.386103    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0722 03:56:32.404971    3911 provision.go:87] duration metric: took 212.48697ms to configureAuth
	I0722 03:56:32.404983    3911 buildroot.go:189] setting minikube options for container-runtime
	I0722 03:56:32.405138    3911 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:56:32.405152    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 03:56:32.405288    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:32.405375    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:32.405462    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.405546    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.405633    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:32.405741    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:56:32.405866    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0722 03:56:32.405874    3911 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0722 03:56:32.454313    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0722 03:56:32.454324    3911 buildroot.go:70] root file system type: tmpfs
	I0722 03:56:32.454404    3911 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0722 03:56:32.454417    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:32.454548    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:32.454656    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.454765    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.454869    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:32.454989    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:56:32.455128    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0722 03:56:32.455173    3911 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0722 03:56:32.513991    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0722 03:56:32.514007    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:32.514163    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:32.514257    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.514355    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.514458    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:32.514588    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:56:32.514721    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0722 03:56:32.514733    3911 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0722 03:56:34.211339    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0722 03:56:34.211353    3911 machine.go:97] duration metric: took 37.192847433s to provisionDockerMachine
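The sequence just above is the log's unit-update pattern: the rendered docker.service is written to docker.service.new over SSH, diffed against the installed unit, and only moved into place (followed by daemon-reload, enable, and restart) when the two differ; here the diff fails because no unit exists yet, so the new file is installed and the multi-user.target symlink is created. A minimal Go sketch that composes the same diff-then-swap one-liner is shown below; the function name and parameters are illustrative, not minikube's own API.

package main

import "fmt"

// swapUnitCmd builds the diff-then-swap one-liner seen in the log above, so the
// service is only reinstalled and restarted when the rendered unit actually changed.
func swapUnitCmd(unitPath, service string) string {
	return fmt.Sprintf(
		"sudo diff -u %[1]s %[1]s.new || { sudo mv %[1]s.new %[1]s; "+
			"sudo systemctl -f daemon-reload && sudo systemctl -f enable %[2]s && "+
			"sudo systemctl -f restart %[2]s; }",
		unitPath, service)
}

func main() {
	// Prints the command for the docker unit, matching the shape of the SSH command above.
	fmt.Println(swapUnitCmd("/lib/systemd/system/docker.service", "docker"))
}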
	I0722 03:56:34.211364    3911 start.go:293] postStartSetup for "ha-090000-m02" (driver="hyperkit")
	I0722 03:56:34.211371    3911 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 03:56:34.211386    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 03:56:34.211563    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 03:56:34.211577    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:34.211687    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:34.211786    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:34.211882    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:34.211969    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/id_rsa Username:docker}
	I0722 03:56:34.242978    3911 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 03:56:34.245962    3911 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 03:56:34.245971    3911 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1111/.minikube/addons for local assets ...
	I0722 03:56:34.246060    3911 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1111/.minikube/files for local assets ...
	I0722 03:56:34.246200    3911 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem -> 16372.pem in /etc/ssl/certs
	I0722 03:56:34.246206    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem -> /etc/ssl/certs/16372.pem
	I0722 03:56:34.246360    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 03:56:34.254372    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem --> /etc/ssl/certs/16372.pem (1708 bytes)
	I0722 03:56:34.273009    3911 start.go:296] duration metric: took 61.631077ms for postStartSetup
	I0722 03:56:34.273028    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 03:56:34.273172    3911 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0722 03:56:34.273182    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:34.273265    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:34.273351    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:34.273439    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:34.273519    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/id_rsa Username:docker}
	I0722 03:56:34.305174    3911 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0722 03:56:34.305226    3911 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0722 03:56:34.339922    3911 fix.go:56] duration metric: took 37.411144035s for fixHost
	I0722 03:56:34.339947    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:34.340082    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:34.340179    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:34.340258    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:34.340343    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:34.340478    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:56:34.340622    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0722 03:56:34.340630    3911 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 03:56:34.388578    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721645794.572489059
	
	I0722 03:56:34.388591    3911 fix.go:216] guest clock: 1721645794.572489059
	I0722 03:56:34.388596    3911 fix.go:229] Guest: 2024-07-22 03:56:34.572489059 -0700 PDT Remote: 2024-07-22 03:56:34.339936 -0700 PDT m=+80.375710715 (delta=232.553059ms)
	I0722 03:56:34.388606    3911 fix.go:200] guest clock delta is within tolerance: 232.553059ms
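The fix.go lines above read the guest clock over SSH (date +%s.%N), compare it with the host clock, and accept the roughly 232ms skew because it falls under minikube's tolerance; the exact threshold is not visible in this log, so the one-second figure in the sketch below is only an assumption for illustration.

package main

import (
	"fmt"
	"time"
)

// clockDeltaOK reports the guest/host skew and whether it is within tolerance,
// in the spirit of the "guest clock delta is within tolerance" line above.
// The 1s tolerance is an illustrative assumption, not minikube's actual value.
func clockDeltaOK(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta, delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(232 * time.Millisecond) // skew comparable to the 232.553059ms above
	delta, ok := clockDeltaOK(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance: %v\n", delta, ok)
}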
	I0722 03:56:34.388609    3911 start.go:83] releasing machines lock for "ha-090000-m02", held for 37.459858552s
	I0722 03:56:34.388627    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 03:56:34.388762    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetIP
	I0722 03:56:34.409792    3911 out.go:177] * Found network options:
	I0722 03:56:34.430136    3911 out.go:177]   - NO_PROXY=192.169.0.5
	W0722 03:56:34.451143    3911 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 03:56:34.451179    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 03:56:34.452017    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 03:56:34.452288    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 03:56:34.452418    3911 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 03:56:34.452457    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	W0722 03:56:34.452511    3911 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 03:56:34.452619    3911 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0722 03:56:34.452639    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:34.452667    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:34.452899    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:34.452939    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:34.453127    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:34.453158    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:34.453309    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:34.453305    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/id_rsa Username:docker}
	I0722 03:56:34.453445    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/id_rsa Username:docker}
	W0722 03:56:34.481920    3911 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 03:56:34.481981    3911 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 03:56:34.527590    3911 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 03:56:34.527602    3911 start.go:495] detecting cgroup driver to use...
	I0722 03:56:34.527664    3911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 03:56:34.542920    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0722 03:56:34.551387    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0722 03:56:34.559553    3911 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0722 03:56:34.559598    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0722 03:56:34.567825    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 03:56:34.576145    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0722 03:56:34.584472    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 03:56:34.592914    3911 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 03:56:34.601360    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0722 03:56:34.609666    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0722 03:56:34.618581    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0722 03:56:34.626849    3911 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 03:56:34.634297    3911 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 03:56:34.642011    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:56:34.733806    3911 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0722 03:56:34.753393    3911 start.go:495] detecting cgroup driver to use...
	I0722 03:56:34.753463    3911 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0722 03:56:34.769228    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 03:56:34.781756    3911 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 03:56:34.797930    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 03:56:34.808316    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 03:56:34.818407    3911 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0722 03:56:34.839910    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 03:56:34.852187    3911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 03:56:34.867777    3911 ssh_runner.go:195] Run: which cri-dockerd
	I0722 03:56:34.870845    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0722 03:56:34.878342    3911 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0722 03:56:34.891766    3911 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0722 03:56:34.986612    3911 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0722 03:56:35.092574    3911 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0722 03:56:35.092596    3911 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0722 03:56:35.106385    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:56:35.202045    3911 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 03:56:37.547949    3911 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.345948624s)
	I0722 03:56:37.548007    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0722 03:56:37.559709    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 03:56:37.570592    3911 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0722 03:56:37.669571    3911 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0722 03:56:37.763201    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:56:37.875925    3911 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0722 03:56:37.889982    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 03:56:37.900245    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:56:38.003656    3911 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0722 03:56:38.067963    3911 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0722 03:56:38.068036    3911 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0722 03:56:38.072622    3911 start.go:563] Will wait 60s for crictl version
	I0722 03:56:38.072673    3911 ssh_runner.go:195] Run: which crictl
	I0722 03:56:38.075745    3911 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 03:56:38.103382    3911 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0722 03:56:38.103467    3911 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 03:56:38.119903    3911 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 03:56:38.160816    3911 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0722 03:56:38.182482    3911 out.go:177]   - env NO_PROXY=192.169.0.5
	I0722 03:56:38.203478    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetIP
	I0722 03:56:38.203850    3911 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0722 03:56:38.207987    3911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 03:56:38.217642    3911 mustload.go:65] Loading cluster: ha-090000
	I0722 03:56:38.217804    3911 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:56:38.218020    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:56:38.218035    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:56:38.226637    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51996
	I0722 03:56:38.226983    3911 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:56:38.227325    3911 main.go:141] libmachine: Using API Version  1
	I0722 03:56:38.227343    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:56:38.227630    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:56:38.227748    3911 main.go:141] libmachine: (ha-090000) Calling .GetState
	I0722 03:56:38.227836    3911 main.go:141] libmachine: (ha-090000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:56:38.227899    3911 main.go:141] libmachine: (ha-090000) DBG | hyperkit pid from json: 3926
	I0722 03:56:38.228835    3911 host.go:66] Checking if "ha-090000" exists ...
	I0722 03:56:38.229086    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:56:38.229101    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:56:38.237412    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51998
	I0722 03:56:38.237753    3911 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:56:38.238100    3911 main.go:141] libmachine: Using API Version  1
	I0722 03:56:38.238118    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:56:38.238328    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:56:38.238453    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:56:38.238565    3911 certs.go:68] Setting up /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000 for IP: 192.169.0.6
	I0722 03:56:38.238571    3911 certs.go:194] generating shared ca certs ...
	I0722 03:56:38.238580    3911 certs.go:226] acquiring lock for ca certs: {Name:mk31b6ba3ba4e51acc59db740baf7c8ba8dd988b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:56:38.238710    3911 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.key
	I0722 03:56:38.238765    3911 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.key
	I0722 03:56:38.238773    3911 certs.go:256] generating profile certs ...
	I0722 03:56:38.238865    3911 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/client.key
	I0722 03:56:38.238954    3911 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key.cd5997a2
	I0722 03:56:38.239013    3911 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.key
	I0722 03:56:38.239026    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0722 03:56:38.239049    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0722 03:56:38.239069    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0722 03:56:38.239087    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0722 03:56:38.239104    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0722 03:56:38.239123    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0722 03:56:38.239143    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0722 03:56:38.239166    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0722 03:56:38.239250    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/1637.pem (1338 bytes)
	W0722 03:56:38.239289    3911 certs.go:480] ignoring /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/1637_empty.pem, impossibly tiny 0 bytes
	I0722 03:56:38.239297    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 03:56:38.239330    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem (1078 bytes)
	I0722 03:56:38.239361    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem (1123 bytes)
	I0722 03:56:38.239392    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem (1675 bytes)
	I0722 03:56:38.239457    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem (1708 bytes)
	I0722 03:56:38.239492    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/1637.pem -> /usr/share/ca-certificates/1637.pem
	I0722 03:56:38.239513    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem -> /usr/share/ca-certificates/16372.pem
	I0722 03:56:38.239532    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0722 03:56:38.239558    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:56:38.239660    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:56:38.239755    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:56:38.239850    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:56:38.239942    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/id_rsa Username:docker}
	I0722 03:56:38.265993    3911 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0722 03:56:38.269678    3911 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0722 03:56:38.278304    3911 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0722 03:56:38.281402    3911 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0722 03:56:38.289616    3911 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0722 03:56:38.292667    3911 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0722 03:56:38.300512    3911 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0722 03:56:38.303570    3911 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0722 03:56:38.311600    3911 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0722 03:56:38.314768    3911 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0722 03:56:38.322792    3911 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0722 03:56:38.325989    3911 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0722 03:56:38.334090    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 03:56:38.354251    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 03:56:38.373942    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 03:56:38.393826    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0722 03:56:38.413300    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 03:56:38.433234    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 03:56:38.452691    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 03:56:38.472206    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 03:56:38.492624    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/1637.pem --> /usr/share/ca-certificates/1637.pem (1338 bytes)
	I0722 03:56:38.511779    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem --> /usr/share/ca-certificates/16372.pem (1708 bytes)
	I0722 03:56:38.531604    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 03:56:38.550960    3911 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0722 03:56:38.564536    3911 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0722 03:56:38.577906    3911 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0722 03:56:38.591620    3911 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0722 03:56:38.605203    3911 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0722 03:56:38.619039    3911 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0722 03:56:38.633179    3911 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0722 03:56:38.646763    3911 ssh_runner.go:195] Run: openssl version
	I0722 03:56:38.650909    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16372.pem && ln -fs /usr/share/ca-certificates/16372.pem /etc/ssl/certs/16372.pem"
	I0722 03:56:38.659202    3911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16372.pem
	I0722 03:56:38.662546    3911 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:38 /usr/share/ca-certificates/16372.pem
	I0722 03:56:38.662579    3911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16372.pem
	I0722 03:56:38.666667    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16372.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 03:56:38.675008    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 03:56:38.683335    3911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 03:56:38.686876    3911 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 03:56:38.686923    3911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 03:56:38.691071    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 03:56:38.699373    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1637.pem && ln -fs /usr/share/ca-certificates/1637.pem /etc/ssl/certs/1637.pem"
	I0722 03:56:38.707510    3911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1637.pem
	I0722 03:56:38.710890    3911 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:38 /usr/share/ca-certificates/1637.pem
	I0722 03:56:38.710923    3911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1637.pem
	I0722 03:56:38.715062    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1637.pem /etc/ssl/certs/51391683.0"
	I0722 03:56:38.723255    3911 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 03:56:38.726701    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 03:56:38.730990    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 03:56:38.735283    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 03:56:38.739568    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 03:56:38.743725    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 03:56:38.747941    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
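Each "openssl x509 -checkend 86400" run above asks whether the named certificate will still be valid 86400 seconds (24 hours) from now; openssl exits 0 when it will and non-zero otherwise, which is how this start path decides that the existing control-plane certs can be reused. A rough Go equivalent using only the standard library is sketched below; the path and the 24-hour window are illustrative, not taken from minikube's code.

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires inside the
// given window, mirroring `openssl x509 -checkend <seconds>` from the log above.
func expiresWithin(path string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("no PEM block in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	// Hypothetical path; the log checks several certs under /var/lib/minikube/certs.
	soon, err := expiresWithin("/var/lib/minikube/certs/front-proxy-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}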
	I0722 03:56:38.752113    3911 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.30.3 docker true true} ...
	I0722 03:56:38.752169    3911 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-090000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-090000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 03:56:38.752183    3911 kube-vip.go:115] generating kube-vip config ...
	I0722 03:56:38.752213    3911 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0722 03:56:38.764297    3911 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0722 03:56:38.764339    3911 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0722 03:56:38.764386    3911 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 03:56:38.777566    3911 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 03:56:38.777617    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0722 03:56:38.785844    3911 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0722 03:56:38.799378    3911 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 03:56:38.812569    3911 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0722 03:56:38.826035    3911 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0722 03:56:38.829004    3911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 03:56:38.838894    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:56:38.934878    3911 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 03:56:38.949889    3911 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 03:56:38.950085    3911 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:56:38.971273    3911 out.go:177] * Verifying Kubernetes components...
	I0722 03:56:38.991992    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:56:39.123554    3911 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 03:56:39.136167    3911 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19313-1111/kubeconfig
	I0722 03:56:39.136377    3911 kapi.go:59] client config for ha-090000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/client.key", CAFile:"/Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xc727ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0722 03:56:39.136421    3911 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0722 03:56:39.136590    3911 node_ready.go:35] waiting up to 6m0s for node "ha-090000-m02" to be "Ready" ...
	I0722 03:56:39.136660    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:39.136665    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:39.136672    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:39.136677    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:40.137255    3911 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0722 03:56:40.137479    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:40.137503    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:40.137521    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:40.137534    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:47.940026    3911 round_trippers.go:574] Response Status: 200 OK in 7802 milliseconds
	I0722 03:56:47.940733    3911 node_ready.go:49] node "ha-090000-m02" has status "Ready":"True"
	I0722 03:56:47.940746    3911 node_ready.go:38] duration metric: took 8.804377648s for node "ha-090000-m02" to be "Ready" ...
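The GETs against /api/v1/nodes/ha-090000-m02 above are a readiness poll: the request is retried until the node object exists and reports a Ready condition of "True". A hedged client-go sketch of that condition check follows; the function name is illustrative and clientset construction is omitted, so this is not minikube's own helper.

package nodewait

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// isNodeReady reports whether the named node has a NodeReady condition with
// status "True", which is what the "Ready":"True" log line above reflects.
func isNodeReady(ctx context.Context, cs kubernetes.Interface, name string) (bool, error) {
	node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return false, err // e.g. a 404 while the node object has not been registered yet
	}
	for _, cond := range node.Status.Conditions {
		if cond.Type == corev1.NodeReady {
			return cond.Status == corev1.ConditionTrue, nil
		}
	}
	return false, nil
}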
	I0722 03:56:47.940753    3911 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 03:56:47.940808    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0722 03:56:47.940815    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:47.940823    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:47.940827    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.019911    3911 round_trippers.go:574] Response Status: 200 OK in 79 milliseconds
	I0722 03:56:48.026784    3911 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lf5mv" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.026849    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lf5mv
	I0722 03:56:48.026855    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.026862    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.026866    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.031605    3911 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 03:56:48.032135    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:48.032143    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.032150    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.032153    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.034575    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:48.034884    3911 pod_ready.go:92] pod "coredns-7db6d8ff4d-lf5mv" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:48.034894    3911 pod_ready.go:81] duration metric: took 8.095254ms for pod "coredns-7db6d8ff4d-lf5mv" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.034902    3911 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mjc97" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.034940    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mjc97
	I0722 03:56:48.034951    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.034959    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.034963    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.037811    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:48.038390    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:48.038397    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.038403    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.038412    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.042255    3911 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 03:56:48.042713    3911 pod_ready.go:92] pod "coredns-7db6d8ff4d-mjc97" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:48.042723    3911 pod_ready.go:81] duration metric: took 7.815334ms for pod "coredns-7db6d8ff4d-mjc97" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.042730    3911 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-090000" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.042769    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-090000
	I0722 03:56:48.042774    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.042780    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.042784    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.046998    3911 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 03:56:48.047505    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:48.047512    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.047517    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.047519    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.050594    3911 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 03:56:48.051034    3911 pod_ready.go:92] pod "etcd-ha-090000" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:48.051045    3911 pod_ready.go:81] duration metric: took 8.309873ms for pod "etcd-ha-090000" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.051052    3911 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-090000-m02" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.051096    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-090000-m02
	I0722 03:56:48.051102    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.051108    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.051112    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.055364    3911 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 03:56:48.055818    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:48.055827    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.055833    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.055837    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.058858    3911 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 03:56:48.059331    3911 pod_ready.go:92] pod "etcd-ha-090000-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:48.059342    3911 pod_ready.go:81] duration metric: took 8.283096ms for pod "etcd-ha-090000-m02" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.059349    3911 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-090000-m03" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.059399    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-090000-m03
	I0722 03:56:48.059405    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.059412    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.059415    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.069366    3911 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0722 03:56:48.140952    3911 request.go:629] Waited for 71.140962ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:48.140996    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:48.141001    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.141007    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.141013    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.150505    3911 round_trippers.go:574] Response Status: 404 Not Found in 9 milliseconds
	I0722 03:56:48.150672    3911 pod_ready.go:97] node "ha-090000-m03" hosting pod "etcd-ha-090000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:48.150684    3911 pod_ready.go:81] duration metric: took 91.332094ms for pod "etcd-ha-090000-m03" in "kube-system" namespace to be "Ready" ...
	E0722 03:56:48.150693    3911 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-090000-m03" hosting pod "etcd-ha-090000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:48.150707    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-090000" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.341259    3911 request.go:629] Waited for 190.473586ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-090000
	I0722 03:56:48.341296    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-090000
	I0722 03:56:48.341301    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.341307    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.341311    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.346534    3911 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 03:56:48.541247    3911 request.go:629] Waited for 194.341501ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:48.541301    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:48.541310    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.541317    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.541321    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.543864    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:48.544294    3911 pod_ready.go:92] pod "kube-apiserver-ha-090000" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:48.544304    3911 pod_ready.go:81] duration metric: took 393.600781ms for pod "kube-apiserver-ha-090000" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.544310    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-090000-m02" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.740936    3911 request.go:629] Waited for 196.590173ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-090000-m02
	I0722 03:56:48.741009    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-090000-m02
	I0722 03:56:48.741017    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.741025    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.741032    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.743601    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:48.941584    3911 request.go:629] Waited for 197.554429ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:48.941670    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:48.941676    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.941681    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.941685    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.943442    3911 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0722 03:56:48.943712    3911 pod_ready.go:92] pod "kube-apiserver-ha-090000-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:48.943722    3911 pod_ready.go:81] duration metric: took 399.417249ms for pod "kube-apiserver-ha-090000-m02" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.943728    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-090000-m03" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:49.142238    3911 request.go:629] Waited for 198.455178ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-090000-m03
	I0722 03:56:49.142276    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-090000-m03
	I0722 03:56:49.142283    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:49.142291    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:49.142297    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:49.144759    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:49.341711    3911 request.go:629] Waited for 196.420201ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:49.341743    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:49.341748    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:49.341754    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:49.341757    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:49.343407    3911 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0722 03:56:49.343465    3911 pod_ready.go:97] node "ha-090000-m03" hosting pod "kube-apiserver-ha-090000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:49.343477    3911 pod_ready.go:81] duration metric: took 399.754899ms for pod "kube-apiserver-ha-090000-m03" in "kube-system" namespace to be "Ready" ...
	E0722 03:56:49.343485    3911 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-090000-m03" hosting pod "kube-apiserver-ha-090000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:49.343492    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-090000" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:49.540820    3911 request.go:629] Waited for 197.295627ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-090000
	I0722 03:56:49.540859    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-090000
	I0722 03:56:49.540864    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:49.540873    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:49.540889    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:49.542752    3911 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0722 03:56:49.741810    3911 request.go:629] Waited for 198.496804ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:49.741941    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:49.741953    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:49.741965    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:49.741971    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:49.745200    3911 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 03:56:49.746626    3911 pod_ready.go:92] pod "kube-controller-manager-ha-090000" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:49.746670    3911 pod_ready.go:81] duration metric: took 403.181202ms for pod "kube-controller-manager-ha-090000" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:49.746679    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-090000-m02" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:49.942498    3911 request.go:629] Waited for 195.70501ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-090000-m02
	I0722 03:56:49.942556    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-090000-m02
	I0722 03:56:49.942566    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:49.942576    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:49.942583    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:49.945821    3911 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 03:56:50.141728    3911 request.go:629] Waited for 194.653258ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:50.141778    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:50.141788    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:50.141874    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:50.141884    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:50.144857    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:50.145401    3911 pod_ready.go:92] pod "kube-controller-manager-ha-090000-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:50.145413    3911 pod_ready.go:81] duration metric: took 398.731517ms for pod "kube-controller-manager-ha-090000-m02" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:50.145421    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-090000-m03" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:50.342252    3911 request.go:629] Waited for 196.790992ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-090000-m03
	I0722 03:56:50.342380    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-090000-m03
	I0722 03:56:50.342391    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:50.342402    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:50.342409    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:50.345338    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:50.541942    3911 request.go:629] Waited for 196.02759ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:50.542016    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:50.542024    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:50.542030    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:50.542035    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:50.543861    3911 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0722 03:56:50.543979    3911 pod_ready.go:97] node "ha-090000-m03" hosting pod "kube-controller-manager-ha-090000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:50.543991    3911 pod_ready.go:81] duration metric: took 398.575179ms for pod "kube-controller-manager-ha-090000-m03" in "kube-system" namespace to be "Ready" ...
	E0722 03:56:50.543999    3911 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-090000-m03" hosting pod "kube-controller-manager-ha-090000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:50.544007    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8f92w" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:50.741981    3911 request.go:629] Waited for 197.931605ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f92w
	I0722 03:56:50.742035    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f92w
	I0722 03:56:50.742108    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:50.742123    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:50.742139    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:50.745292    3911 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 03:56:50.941201    3911 request.go:629] Waited for 195.378005ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m04
	I0722 03:56:50.941242    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m04
	I0722 03:56:50.941250    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:50.941279    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:50.941285    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:50.943392    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:50.943959    3911 pod_ready.go:92] pod "kube-proxy-8f92w" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:50.943968    3911 pod_ready.go:81] duration metric: took 399.965093ms for pod "kube-proxy-8f92w" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:50.943975    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8wl7h" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:51.141802    3911 request.go:629] Waited for 197.795735ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8wl7h
	I0722 03:56:51.141881    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8wl7h
	I0722 03:56:51.141889    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:51.141897    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:51.141901    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:51.144430    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:51.341886    3911 request.go:629] Waited for 196.964343ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:51.341949    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:51.342008    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:51.342021    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:51.342042    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:51.345071    3911 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 03:56:51.345563    3911 pod_ready.go:92] pod "kube-proxy-8wl7h" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:51.345575    3911 pod_ready.go:81] duration metric: took 401.60562ms for pod "kube-proxy-8wl7h" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:51.345584    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s5kg7" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:51.541992    3911 request.go:629] Waited for 196.373771ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5kg7
	I0722 03:56:51.542055    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5kg7
	I0722 03:56:51.542062    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:51.542069    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:51.542073    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:51.544061    3911 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0722 03:56:51.741851    3911 request.go:629] Waited for 197.301001ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:51.741903    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:51.741920    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:51.741972    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:51.741981    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:51.744924    3911 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0722 03:56:51.745061    3911 pod_ready.go:97] node "ha-090000-m03" hosting pod "kube-proxy-s5kg7" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:51.745083    3911 pod_ready.go:81] duration metric: took 399.503782ms for pod "kube-proxy-s5kg7" in "kube-system" namespace to be "Ready" ...
	E0722 03:56:51.745093    3911 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-090000-m03" hosting pod "kube-proxy-s5kg7" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:51.745099    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xzpdq" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:51.942237    3911 request.go:629] Waited for 197.092533ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xzpdq
	I0722 03:56:51.942331    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xzpdq
	I0722 03:56:51.942339    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:51.942348    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:51.942352    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:51.944379    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:52.140792    3911 request.go:629] Waited for 195.988207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:52.140891    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:52.140898    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:52.140905    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:52.140908    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:52.143865    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:52.144152    3911 pod_ready.go:92] pod "kube-proxy-xzpdq" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:52.144162    3911 pod_ready.go:81] duration metric: took 399.065856ms for pod "kube-proxy-xzpdq" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:52.144174    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-090000" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:52.341088    3911 request.go:629] Waited for 196.884909ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-090000
	I0722 03:56:52.341120    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-090000
	I0722 03:56:52.341125    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:52.341131    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:52.341158    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:52.342922    3911 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0722 03:56:52.541268    3911 request.go:629] Waited for 197.724279ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:52.541331    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:52.541336    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:52.541343    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:52.541348    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:52.543046    3911 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0722 03:56:52.543447    3911 pod_ready.go:92] pod "kube-scheduler-ha-090000" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:52.543457    3911 pod_ready.go:81] duration metric: took 399.28772ms for pod "kube-scheduler-ha-090000" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:52.543466    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-090000-m02" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:52.741611    3911 request.go:629] Waited for 198.11239ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-090000-m02
	I0722 03:56:52.741678    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-090000-m02
	I0722 03:56:52.741684    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:52.741690    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:52.741694    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:52.743685    3911 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0722 03:56:52.941884    3911 request.go:629] Waited for 197.596709ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:52.941966    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:52.941974    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:52.941983    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:52.941990    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:52.944672    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:52.944946    3911 pod_ready.go:92] pod "kube-scheduler-ha-090000-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:52.944957    3911 pod_ready.go:81] duration metric: took 401.495544ms for pod "kube-scheduler-ha-090000-m02" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:52.944964    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-090000-m03" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:53.140781    3911 request.go:629] Waited for 195.779713ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-090000-m03
	I0722 03:56:53.140822    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-090000-m03
	I0722 03:56:53.140828    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:53.140846    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:53.140857    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:53.143259    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:53.340903    3911 request.go:629] Waited for 197.282616ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:53.341040    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:53.341054    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:53.341066    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:53.341072    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:53.343900    3911 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0722 03:56:53.344052    3911 pod_ready.go:97] node "ha-090000-m03" hosting pod "kube-scheduler-ha-090000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:53.344080    3911 pod_ready.go:81] duration metric: took 399.121362ms for pod "kube-scheduler-ha-090000-m03" in "kube-system" namespace to be "Ready" ...
	E0722 03:56:53.344087    3911 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-090000-m03" hosting pod "kube-scheduler-ha-090000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:53.344093    3911 pod_ready.go:38] duration metric: took 5.403478999s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
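
The pod_ready waits above poll each pod's Ready condition through the apiserver until it reports True or the 6m0s timeout expires; pods on the deleted ha-090000-m03 node are skipped because the node lookup 404s first. A minimal sketch of that polling pattern with client-go (pod name, kubeconfig path, and intervals are illustrative, not the test's implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls until the pod's Ready condition is True or the timeout hits.
func waitPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string) error {
	return wait.PollUntilContextTimeout(ctx, 500*time.Millisecond, 6*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
			if err != nil {
				return false, nil // transient errors: keep polling
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	if err := waitPodReady(context.Background(), cs, "kube-system", "kube-apiserver-ha-090000"); err != nil {
		panic(err)
	}
	fmt.Println("pod is Ready")
}
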
	I0722 03:56:53.344113    3911 api_server.go:52] waiting for apiserver process to appear ...
	I0722 03:56:53.344169    3911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 03:56:53.355872    3911 api_server.go:72] duration metric: took 14.406346458s to wait for apiserver process to appear ...
	I0722 03:56:53.355884    3911 api_server.go:88] waiting for apiserver healthz status ...
	I0722 03:56:53.355903    3911 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0722 03:56:53.360168    3911 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0722 03:56:53.360204    3911 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0722 03:56:53.360209    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:53.360215    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:53.360219    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:53.360847    3911 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0722 03:56:53.360928    3911 api_server.go:141] control plane version: v1.30.3
	I0722 03:56:53.360938    3911 api_server.go:131] duration metric: took 5.049309ms to wait for apiserver health ...
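
The healthz gate above is a plain HTTPS GET against the control-plane endpoint, retried until it returns 200 with body "ok". A self-contained sketch of that check (TLS verification is skipped only to keep the example short; the real client trusts the cluster CA):

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   2 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(time.Minute)
	for time.Now().Before(deadline) {
		resp, err := client.Get("https://192.169.0.5:8443/healthz")
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				fmt.Printf("healthz returned 200: %s\n", body) // expect "ok"
				return
			}
		}
		time.Sleep(time.Second)
	}
	fmt.Println("apiserver did not become healthy before the deadline")
}
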
	I0722 03:56:53.360953    3911 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 03:56:53.540855    3911 request.go:629] Waited for 179.859471ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0722 03:56:53.540957    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0722 03:56:53.540968    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:53.540979    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:53.540985    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:53.546462    3911 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 03:56:53.551792    3911 system_pods.go:59] 26 kube-system pods found
	I0722 03:56:53.551807    3911 system_pods.go:61] "coredns-7db6d8ff4d-lf5mv" [cd051db1-dcbb-4fee-85d9-be13d1be38ec] Running
	I0722 03:56:53.551813    3911 system_pods.go:61] "coredns-7db6d8ff4d-mjc97" [ac1f1032-14ce-4c0c-b95b-a86bd4ef7810] Running
	I0722 03:56:53.551817    3911 system_pods.go:61] "etcd-ha-090000" [ec0787c7-a5cb-4375-b6c7-04e80160dbd9] Running
	I0722 03:56:53.551820    3911 system_pods.go:61] "etcd-ha-090000-m02" [70e6e1d6-208c-45b6-ad64-c10be5faedbb] Running
	I0722 03:56:53.551823    3911 system_pods.go:61] "etcd-ha-090000-m03" [ed74b70b-4483-4ac9-9db2-5c1507439fbf] Running
	I0722 03:56:53.551830    3911 system_pods.go:61] "kindnet-kqb2r" [58565238-777a-421f-a15d-38bd5daf596e] Running
	I0722 03:56:53.551834    3911 system_pods.go:61] "kindnet-lf6b4" [aadac04f-abbe-481b-accf-df0991b98748] Running
	I0722 03:56:53.551836    3911 system_pods.go:61] "kindnet-mqxjd" [439b0e4a-14b8-4556-9ae6-6a26590b6d5d] Running
	I0722 03:56:53.551839    3911 system_pods.go:61] "kindnet-xt575" [21e859c8-a102-4b48-ba9d-3b3902be8ba1] Running
	I0722 03:56:53.551842    3911 system_pods.go:61] "kube-apiserver-ha-090000" [c0377564-cef8-4807-8ab1-3fc6f2607591] Running
	I0722 03:56:53.551844    3911 system_pods.go:61] "kube-apiserver-ha-090000-m02" [87130092-7fea-4cf8-a1b4-b2b853d60334] Running
	I0722 03:56:53.551847    3911 system_pods.go:61] "kube-apiserver-ha-090000-m03" [056a2588-da71-4189-93cd-10a92f10d8d4] Running
	I0722 03:56:53.551850    3911 system_pods.go:61] "kube-controller-manager-ha-090000" [89cfb4c4-8d84-42f2-bae3-3962aada627b] Running
	I0722 03:56:53.551853    3911 system_pods.go:61] "kube-controller-manager-ha-090000-m02" [9173940b-a550-4f67-b37c-78e456b18a13] Running
	I0722 03:56:53.551855    3911 system_pods.go:61] "kube-controller-manager-ha-090000-m03" [75846dcb-f9d9-46c6-8eaa-857c3da39b9a] Running
	I0722 03:56:53.551858    3911 system_pods.go:61] "kube-proxy-8f92w" [10da7b52-073d-40c9-87ea-8484d68147e3] Running
	I0722 03:56:53.551861    3911 system_pods.go:61] "kube-proxy-8wl7h" [210fb608-afcf-4f5c-9b75-cc949c268854] Running
	I0722 03:56:53.551864    3911 system_pods.go:61] "kube-proxy-s5kg7" [8513335b-221c-4602-9aaa-b1e85b828bb4] Running
	I0722 03:56:53.551866    3911 system_pods.go:61] "kube-proxy-xzpdq" [d7b9fd02-6f5f-47a4-afe2-1dab0d9141b7] Running
	I0722 03:56:53.551869    3911 system_pods.go:61] "kube-scheduler-ha-090000" [82031515-de24-4248-97ff-2bb892974db3] Running
	I0722 03:56:53.551872    3911 system_pods.go:61] "kube-scheduler-ha-090000-m02" [2f042e46-2b51-4b25-b94a-c22dde65c7fa] Running
	I0722 03:56:53.551874    3911 system_pods.go:61] "kube-scheduler-ha-090000-m03" [bf7cca91-4911-4f81-bde0-cbb089bd2fd2] Running
	I0722 03:56:53.551877    3911 system_pods.go:61] "kube-vip-ha-090000" [46ed0197-35a7-40cd-8480-0e66a09d4d69] Running
	I0722 03:56:53.551880    3911 system_pods.go:61] "kube-vip-ha-090000-m02" [b6025cfc-c08e-4981-b1b6-4f26ba5d5538] Running
	I0722 03:56:53.551882    3911 system_pods.go:61] "kube-vip-ha-090000-m03" [e7bc337b-5f22-4c55-86cb-1417b15343bd] Running
	I0722 03:56:53.551885    3911 system_pods.go:61] "storage-provisioner" [c1214845-bf0e-4808-9e11-faf18dd3cb3f] Running
	I0722 03:56:53.551889    3911 system_pods.go:74] duration metric: took 190.935916ms to wait for pod list to return data ...
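
The recurring "Waited for ~200ms due to client-side throttling" lines come from the Kubernetes client's token-bucket limiter spacing requests out on the client side (not API priority and fairness). A minimal illustration of the same effect with golang.org/x/time/rate; the 5 QPS figure is chosen to reproduce the 200ms gap and is not taken from minikube's configuration:

package main

import (
	"context"
	"fmt"
	"time"

	"golang.org/x/time/rate"
)

func main() {
	// 5 requests/second with burst 1 spaces calls roughly 200ms apart.
	limiter := rate.NewLimiter(rate.Limit(5), 1)
	start := time.Now()
	for i := 0; i < 5; i++ {
		if err := limiter.Wait(context.Background()); err != nil {
			panic(err)
		}
		fmt.Printf("request %d at %v\n", i, time.Since(start).Round(time.Millisecond))
	}
}
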
	I0722 03:56:53.551895    3911 default_sa.go:34] waiting for default service account to be created ...
	I0722 03:56:53.741633    3911 request.go:629] Waited for 189.696516ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0722 03:56:53.741686    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0722 03:56:53.741703    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:53.741714    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:53.741724    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:53.744889    3911 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 03:56:53.745045    3911 default_sa.go:45] found service account: "default"
	I0722 03:56:53.745059    3911 default_sa.go:55] duration metric: took 193.164449ms for default service account to be created ...
	I0722 03:56:53.745066    3911 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 03:56:53.941905    3911 request.go:629] Waited for 196.736167ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0722 03:56:53.941953    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0722 03:56:53.941965    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:53.941979    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:53.941986    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:53.947853    3911 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 03:56:53.953138    3911 system_pods.go:86] 26 kube-system pods found
	I0722 03:56:53.953150    3911 system_pods.go:89] "coredns-7db6d8ff4d-lf5mv" [cd051db1-dcbb-4fee-85d9-be13d1be38ec] Running
	I0722 03:56:53.953154    3911 system_pods.go:89] "coredns-7db6d8ff4d-mjc97" [ac1f1032-14ce-4c0c-b95b-a86bd4ef7810] Running
	I0722 03:56:53.953158    3911 system_pods.go:89] "etcd-ha-090000" [ec0787c7-a5cb-4375-b6c7-04e80160dbd9] Running
	I0722 03:56:53.953161    3911 system_pods.go:89] "etcd-ha-090000-m02" [70e6e1d6-208c-45b6-ad64-c10be5faedbb] Running
	I0722 03:56:53.953164    3911 system_pods.go:89] "etcd-ha-090000-m03" [ed74b70b-4483-4ac9-9db2-5c1507439fbf] Running
	I0722 03:56:53.953167    3911 system_pods.go:89] "kindnet-kqb2r" [58565238-777a-421f-a15d-38bd5daf596e] Running
	I0722 03:56:53.953171    3911 system_pods.go:89] "kindnet-lf6b4" [aadac04f-abbe-481b-accf-df0991b98748] Running
	I0722 03:56:53.953174    3911 system_pods.go:89] "kindnet-mqxjd" [439b0e4a-14b8-4556-9ae6-6a26590b6d5d] Running
	I0722 03:56:53.953176    3911 system_pods.go:89] "kindnet-xt575" [21e859c8-a102-4b48-ba9d-3b3902be8ba1] Running
	I0722 03:56:53.953179    3911 system_pods.go:89] "kube-apiserver-ha-090000" [c0377564-cef8-4807-8ab1-3fc6f2607591] Running
	I0722 03:56:53.953182    3911 system_pods.go:89] "kube-apiserver-ha-090000-m02" [87130092-7fea-4cf8-a1b4-b2b853d60334] Running
	I0722 03:56:53.953185    3911 system_pods.go:89] "kube-apiserver-ha-090000-m03" [056a2588-da71-4189-93cd-10a92f10d8d4] Running
	I0722 03:56:53.953189    3911 system_pods.go:89] "kube-controller-manager-ha-090000" [89cfb4c4-8d84-42f2-bae3-3962aada627b] Running
	I0722 03:56:53.953192    3911 system_pods.go:89] "kube-controller-manager-ha-090000-m02" [9173940b-a550-4f67-b37c-78e456b18a13] Running
	I0722 03:56:53.953195    3911 system_pods.go:89] "kube-controller-manager-ha-090000-m03" [75846dcb-f9d9-46c6-8eaa-857c3da39b9a] Running
	I0722 03:56:53.953199    3911 system_pods.go:89] "kube-proxy-8f92w" [10da7b52-073d-40c9-87ea-8484d68147e3] Running
	I0722 03:56:53.953203    3911 system_pods.go:89] "kube-proxy-8wl7h" [210fb608-afcf-4f5c-9b75-cc949c268854] Running
	I0722 03:56:53.953206    3911 system_pods.go:89] "kube-proxy-s5kg7" [8513335b-221c-4602-9aaa-b1e85b828bb4] Running
	I0722 03:56:53.953209    3911 system_pods.go:89] "kube-proxy-xzpdq" [d7b9fd02-6f5f-47a4-afe2-1dab0d9141b7] Running
	I0722 03:56:53.953214    3911 system_pods.go:89] "kube-scheduler-ha-090000" [82031515-de24-4248-97ff-2bb892974db3] Running
	I0722 03:56:53.953219    3911 system_pods.go:89] "kube-scheduler-ha-090000-m02" [2f042e46-2b51-4b25-b94a-c22dde65c7fa] Running
	I0722 03:56:53.953222    3911 system_pods.go:89] "kube-scheduler-ha-090000-m03" [bf7cca91-4911-4f81-bde0-cbb089bd2fd2] Running
	I0722 03:56:53.953226    3911 system_pods.go:89] "kube-vip-ha-090000" [46ed0197-35a7-40cd-8480-0e66a09d4d69] Running
	I0722 03:56:53.953229    3911 system_pods.go:89] "kube-vip-ha-090000-m02" [b6025cfc-c08e-4981-b1b6-4f26ba5d5538] Running
	I0722 03:56:53.953232    3911 system_pods.go:89] "kube-vip-ha-090000-m03" [e7bc337b-5f22-4c55-86cb-1417b15343bd] Running
	I0722 03:56:53.953235    3911 system_pods.go:89] "storage-provisioner" [c1214845-bf0e-4808-9e11-faf18dd3cb3f] Running
	I0722 03:56:53.953241    3911 system_pods.go:126] duration metric: took 208.1764ms to wait for k8s-apps to be running ...
	I0722 03:56:53.953247    3911 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 03:56:53.953298    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 03:56:53.964081    3911 system_svc.go:56] duration metric: took 10.830617ms WaitForService to wait for kubelet
	I0722 03:56:53.964094    3911 kubeadm.go:582] duration metric: took 15.014585328s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
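
The kubelet gate above runs `sudo systemctl is-active --quiet service kubelet` on the node and treats a zero exit code as "running". A small sketch of that exit-code check, run locally for brevity (the test executes it through its SSH runner instead):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// systemctl is-active --quiet exits 0 when the unit is active.
	cmd := exec.Command("sudo", "systemctl", "is-active", "--quiet", "service", "kubelet")
	if err := cmd.Run(); err != nil {
		fmt.Println("kubelet is not active:", err)
		return
	}
	fmt.Println("kubelet is active")
}
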
	I0722 03:56:53.964109    3911 node_conditions.go:102] verifying NodePressure condition ...
	I0722 03:56:54.141596    3911 request.go:629] Waited for 177.455634ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0722 03:56:54.141627    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0722 03:56:54.141632    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:54.141645    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:54.141650    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:54.156645    3911 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0722 03:56:54.157279    3911 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 03:56:54.157291    3911 node_conditions.go:123] node cpu capacity is 2
	I0722 03:56:54.157302    3911 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 03:56:54.157305    3911 node_conditions.go:123] node cpu capacity is 2
	I0722 03:56:54.157309    3911 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 03:56:54.157315    3911 node_conditions.go:123] node cpu capacity is 2
	I0722 03:56:54.157319    3911 node_conditions.go:105] duration metric: took 193.210914ms to run NodePressure ...
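
The NodePressure step lists every node and reads its reported capacity (the 17734596Ki of ephemeral storage and 2 CPUs above) along with the pressure conditions. A sketch of reading the same fields with client-go; the kubeconfig path and output format are illustrative:

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs := kubernetes.NewForConfigOrDie(cfg)
	nodes, err := cs.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
		cpu := n.Status.Capacity[corev1.ResourceCPU]
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}
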
	I0722 03:56:54.157327    3911 start.go:241] waiting for startup goroutines ...
	I0722 03:56:54.157344    3911 start.go:255] writing updated cluster config ...
	I0722 03:56:54.178247    3911 out.go:177] 
	I0722 03:56:54.215301    3911 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:56:54.215427    3911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/config.json ...
	I0722 03:56:54.237875    3911 out.go:177] * Starting "ha-090000-m04" worker node in "ha-090000" cluster
	I0722 03:56:54.313643    3911 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 03:56:54.313672    3911 cache.go:56] Caching tarball of preloaded images
	I0722 03:56:54.313891    3911 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0722 03:56:54.313909    3911 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 03:56:54.314031    3911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/config.json ...
	I0722 03:56:54.314743    3911 start.go:360] acquireMachinesLock for ha-090000-m04: {Name:mk52223550765842aacf96640479870ec8b5e985 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 03:56:54.314865    3911 start.go:364] duration metric: took 97.548µs to acquireMachinesLock for "ha-090000-m04"
	I0722 03:56:54.314900    3911 start.go:96] Skipping create...Using existing machine configuration
	I0722 03:56:54.314909    3911 fix.go:54] fixHost starting: m04
	I0722 03:56:54.315362    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:56:54.315392    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:56:54.324846    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52004
	I0722 03:56:54.325299    3911 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:56:54.325696    3911 main.go:141] libmachine: Using API Version  1
	I0722 03:56:54.325717    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:56:54.325994    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:56:54.326143    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:56:54.326258    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetState
	I0722 03:56:54.326348    3911 main.go:141] libmachine: (ha-090000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:56:54.326459    3911 main.go:141] libmachine: (ha-090000-m04) DBG | hyperkit pid from json: 3802
	I0722 03:56:54.327677    3911 main.go:141] libmachine: (ha-090000-m04) DBG | hyperkit pid 3802 missing from process table
	I0722 03:56:54.327712    3911 fix.go:112] recreateIfNeeded on ha-090000-m04: state=Stopped err=<nil>
	I0722 03:56:54.327724    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	W0722 03:56:54.327832    3911 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 03:56:54.347991    3911 out.go:177] * Restarting existing hyperkit VM for "ha-090000-m04" ...
	I0722 03:56:54.405790    3911 main.go:141] libmachine: (ha-090000-m04) Calling .Start
	I0722 03:56:54.406014    3911 main.go:141] libmachine: (ha-090000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:56:54.406069    3911 main.go:141] libmachine: (ha-090000-m04) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/hyperkit.pid
	I0722 03:56:54.407060    3911 main.go:141] libmachine: (ha-090000-m04) DBG | hyperkit pid 3802 missing from process table
	I0722 03:56:54.407069    3911 main.go:141] libmachine: (ha-090000-m04) DBG | pid 3802 is in state "Stopped"
	I0722 03:56:54.407087    3911 main.go:141] libmachine: (ha-090000-m04) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/hyperkit.pid...
	I0722 03:56:54.407246    3911 main.go:141] libmachine: (ha-090000-m04) DBG | Using UUID f13599ad-3762-43bd-a5c6-6cfffb7afaca
	I0722 03:56:54.437806    3911 main.go:141] libmachine: (ha-090000-m04) DBG | Generated MAC ca:7d:32:d9:5d:55
	I0722 03:56:54.437841    3911 main.go:141] libmachine: (ha-090000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-090000
	I0722 03:56:54.437986    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f13599ad-3762-43bd-a5c6-6cfffb7afaca", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a6900)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0722 03:56:54.438025    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f13599ad-3762-43bd-a5c6-6cfffb7afaca", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a6900)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0722 03:56:54.438089    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f13599ad-3762-43bd-a5c6-6cfffb7afaca", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/ha-090000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/tty,log=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/bzimage,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-090000"}
	I0722 03:56:54.438135    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f13599ad-3762-43bd-a5c6-6cfffb7afaca -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/ha-090000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/tty,log=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/bzimage,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-090000"
	I0722 03:56:54.438159    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0722 03:56:54.439735    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 DEBUG: hyperkit: Pid is 3973
	I0722 03:56:54.440437    3911 main.go:141] libmachine: (ha-090000-m04) DBG | Attempt 0
	I0722 03:56:54.440473    3911 main.go:141] libmachine: (ha-090000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:56:54.440546    3911 main.go:141] libmachine: (ha-090000-m04) DBG | hyperkit pid from json: 3973
	I0722 03:56:54.443188    3911 main.go:141] libmachine: (ha-090000-m04) DBG | Searching for ca:7d:32:d9:5d:55 in /var/db/dhcpd_leases ...
	I0722 03:56:54.443309    3911 main.go:141] libmachine: (ha-090000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0722 03:56:54.443345    3911 main.go:141] libmachine: (ha-090000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4e:65:fa:f9:26:3 ID:1,4e:65:fa:f9:26:3 Lease:0x669f8c45}
	I0722 03:56:54.443358    3911 main.go:141] libmachine: (ha-090000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:de:e:68:47:cf:44 ID:1,de:e:68:47:cf:44 Lease:0x669f8c1b}
	I0722 03:56:54.443395    3911 main.go:141] libmachine: (ha-090000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:b6:20:31:8d:e0:89 ID:1,b6:20:31:8d:e0:89 Lease:0x669e3a76}
	I0722 03:56:54.443440    3911 main.go:141] libmachine: (ha-090000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ca:7d:32:d9:5d:55 ID:1,ca:7d:32:d9:5d:55 Lease:0x669f8bc8}
	I0722 03:56:54.443458    3911 main.go:141] libmachine: (ha-090000-m04) DBG | Found match: ca:7d:32:d9:5d:55
	I0722 03:56:54.443482    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetConfigRaw
	I0722 03:56:54.443506    3911 main.go:141] libmachine: (ha-090000-m04) DBG | IP: 192.169.0.8
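
The driver resolves the VM's IP by scanning /var/db/dhcpd_leases for the lease whose hardware address matches the MAC it generated. A sketch of that lookup; the ip_address/hw_address field names and block layout are assumed from the parsed entries printed above, not verified against the actual file format:

package main

import (
	"bufio"
	"fmt"
	"os"
	"strings"
)

// findLeaseIP returns the ip_address of the lease block whose hw_address
// contains the given MAC (assumed field names; see note above).
func findLeaseIP(path, mac string) (string, error) {
	f, err := os.Open(path)
	if err != nil {
		return "", err
	}
	defer f.Close()

	var ip string
	sc := bufio.NewScanner(f)
	for sc.Scan() {
		line := strings.TrimSpace(sc.Text())
		switch {
		case strings.HasPrefix(line, "ip_address="):
			ip = strings.TrimPrefix(line, "ip_address=")
		case strings.HasPrefix(line, "hw_address=") && strings.Contains(line, mac):
			return ip, nil
		case line == "}":
			ip = "" // end of a lease block
		}
	}
	return "", fmt.Errorf("no lease found for %s", mac)
}

func main() {
	ip, err := findLeaseIP("/var/db/dhcpd_leases", "ca:7d:32:d9:5d:55")
	if err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("IP:", ip)
}
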
	I0722 03:56:54.444347    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetIP
	I0722 03:56:54.444653    3911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/config.json ...
	I0722 03:56:54.445364    3911 machine.go:94] provisionDockerMachine start ...
	I0722 03:56:54.445380    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:56:54.445624    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:56:54.445766    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:56:54.445925    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:56:54.446085    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:56:54.446269    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:56:54.446478    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:56:54.446750    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0722 03:56:54.446762    3911 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 03:56:54.450021    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0722 03:56:54.474479    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0722 03:56:54.475620    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0722 03:56:54.475643    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0722 03:56:54.475657    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0722 03:56:54.475667    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0722 03:56:54.866202    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0722 03:56:54.866218    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0722 03:56:54.981166    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0722 03:56:54.981182    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0722 03:56:54.981189    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0722 03:56:54.981195    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0722 03:56:54.982030    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0722 03:56:54.982040    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0722 03:57:00.347122    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:57:00 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0722 03:57:00.347199    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:57:00 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0722 03:57:00.347212    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:57:00 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0722 03:57:00.370939    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:57:00 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0722 03:57:29.507146    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 03:57:29.507164    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetMachineName
	I0722 03:57:29.507326    3911 buildroot.go:166] provisioning hostname "ha-090000-m04"
	I0722 03:57:29.507337    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetMachineName
	I0722 03:57:29.507436    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:29.507532    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:29.507631    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.507730    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.507816    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:29.507942    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:57:29.508105    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0722 03:57:29.508119    3911 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-090000-m04 && echo "ha-090000-m04" | sudo tee /etc/hostname
	I0722 03:57:29.566504    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-090000-m04
	
	I0722 03:57:29.566520    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:29.566676    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:29.566768    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.566861    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.566958    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:29.567095    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:57:29.567238    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0722 03:57:29.567250    3911 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-090000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-090000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-090000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 03:57:29.622448    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 03:57:29.622463    3911 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19313-1111/.minikube CaCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19313-1111/.minikube}
	I0722 03:57:29.622472    3911 buildroot.go:174] setting up certificates
	I0722 03:57:29.622479    3911 provision.go:84] configureAuth start
	I0722 03:57:29.622486    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetMachineName
	I0722 03:57:29.622644    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetIP
	I0722 03:57:29.622751    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:29.622856    3911 provision.go:143] copyHostCerts
	I0722 03:57:29.622886    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem
	I0722 03:57:29.622945    3911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem, removing ...
	I0722 03:57:29.622952    3911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem
	I0722 03:57:29.623163    3911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem (1078 bytes)
	I0722 03:57:29.623368    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem
	I0722 03:57:29.623410    3911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem, removing ...
	I0722 03:57:29.623415    3911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem
	I0722 03:57:29.623495    3911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem (1123 bytes)
	I0722 03:57:29.623640    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem
	I0722 03:57:29.623679    3911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem, removing ...
	I0722 03:57:29.623684    3911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem
	I0722 03:57:29.623770    3911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem (1675 bytes)
	I0722 03:57:29.623918    3911 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem org=jenkins.ha-090000-m04 san=[127.0.0.1 192.169.0.8 ha-090000-m04 localhost minikube]
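
configureAuth regenerates a server certificate signed by the minikube CA with the SANs listed above (127.0.0.1, 192.169.0.8, ha-090000-m04, localhost, minikube) before copying it to the node. A rough sketch of producing that kind of CA-signed server certificate with crypto/x509, assuming a PKCS#1 RSA CA key and placeholder file names:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func must(err error) {
	if err != nil {
		panic(err)
	}
}

func main() {
	// Load the CA certificate and key (placeholder paths).
	caPEM, err := os.ReadFile("ca.pem")
	must(err)
	caBlock, _ := pem.Decode(caPEM)
	caCert, err := x509.ParseCertificate(caBlock.Bytes)
	must(err)
	keyPEM, err := os.ReadFile("ca-key.pem")
	must(err)
	keyBlock, _ := pem.Decode(keyPEM)
	caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes)
	must(err)

	// Fresh key pair for the server certificate.
	serverKey, err := rsa.GenerateKey(rand.Reader, 2048)
	must(err)

	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{"jenkins.ha-090000-m04"}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().AddDate(10, 0, 0),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"ha-090000-m04", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.8")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &serverKey.PublicKey, caKey)
	must(err)
	must(os.WriteFile("server.pem", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0o644))
	must(os.WriteFile("server-key.pem", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(serverKey)}), 0o600))
}
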
	I0722 03:57:29.798481    3911 provision.go:177] copyRemoteCerts
	I0722 03:57:29.798536    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 03:57:29.798553    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:29.798720    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:29.798832    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.798934    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:29.799034    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/id_rsa Username:docker}
	I0722 03:57:29.828994    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0722 03:57:29.829071    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0722 03:57:29.849145    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0722 03:57:29.849216    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0722 03:57:29.868964    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0722 03:57:29.869035    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 03:57:29.889770    3911 provision.go:87] duration metric: took 267.289907ms to configureAuth
	I0722 03:57:29.889784    3911 buildroot.go:189] setting minikube options for container-runtime
	I0722 03:57:29.889952    3911 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:57:29.889967    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:57:29.890101    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:29.890199    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:29.890275    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.890367    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.890452    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:29.890562    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:57:29.890690    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0722 03:57:29.890698    3911 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0722 03:57:29.941114    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0722 03:57:29.941126    3911 buildroot.go:70] root file system type: tmpfs
	I0722 03:57:29.941203    3911 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0722 03:57:29.941214    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:29.941336    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:29.941424    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.941505    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.941596    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:29.941717    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:57:29.941859    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0722 03:57:29.941908    3911 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0722 03:57:29.999626    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0722 03:57:29.999643    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:29.999785    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:29.999874    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.999968    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:30.000060    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:30.000202    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:57:30.000354    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0722 03:57:30.000367    3911 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0722 03:57:31.614623    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0722 03:57:31.614645    3911 machine.go:97] duration metric: took 37.170271356s to provisionDockerMachine
	I0722 03:57:31.614654    3911 start.go:293] postStartSetup for "ha-090000-m04" (driver="hyperkit")
	I0722 03:57:31.614661    3911 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 03:57:31.614672    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:57:31.614863    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 03:57:31.614878    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:31.614977    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:31.615074    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:31.615173    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:31.615258    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/id_rsa Username:docker}
	I0722 03:57:31.646689    3911 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 03:57:31.649952    3911 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 03:57:31.649963    3911 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1111/.minikube/addons for local assets ...
	I0722 03:57:31.650063    3911 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1111/.minikube/files for local assets ...
	I0722 03:57:31.650246    3911 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem -> 16372.pem in /etc/ssl/certs
	I0722 03:57:31.650252    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem -> /etc/ssl/certs/16372.pem
	I0722 03:57:31.650455    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 03:57:31.658413    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem --> /etc/ssl/certs/16372.pem (1708 bytes)
	I0722 03:57:31.678576    3911 start.go:296] duration metric: took 63.915273ms for postStartSetup
	I0722 03:57:31.678597    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:57:31.678768    3911 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0722 03:57:31.678782    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:31.678870    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:31.678960    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:31.679037    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:31.679115    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/id_rsa Username:docker}
	I0722 03:57:31.710161    3911 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0722 03:57:31.710221    3911 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0722 03:57:31.764084    3911 fix.go:56] duration metric: took 37.450180093s for fixHost
	I0722 03:57:31.764110    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:31.764259    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:31.764351    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:31.764456    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:31.764557    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:31.764680    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:57:31.764822    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0722 03:57:31.764829    3911 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 03:57:31.816488    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721645852.004461870
	
	I0722 03:57:31.816502    3911 fix.go:216] guest clock: 1721645852.004461870
	I0722 03:57:31.816508    3911 fix.go:229] Guest: 2024-07-22 03:57:32.00446187 -0700 PDT Remote: 2024-07-22 03:57:31.764099 -0700 PDT m=+137.801419594 (delta=240.36287ms)
	I0722 03:57:31.816522    3911 fix.go:200] guest clock delta is within tolerance: 240.36287ms
	I0722 03:57:31.816527    3911 start.go:83] releasing machines lock for "ha-090000-m04", held for 37.50265184s
	I0722 03:57:31.816545    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:57:31.816680    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetIP
	I0722 03:57:31.839252    3911 out.go:177] * Found network options:
	I0722 03:57:31.860719    3911 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0722 03:57:31.881811    3911 proxy.go:119] fail to check proxy env: Error ip not in block
	W0722 03:57:31.881829    3911 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 03:57:31.881843    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:57:31.882321    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:57:31.882463    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:57:31.882549    3911 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 03:57:31.882589    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	W0722 03:57:31.882613    3911 proxy.go:119] fail to check proxy env: Error ip not in block
	W0722 03:57:31.882631    3911 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 03:57:31.882716    3911 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0722 03:57:31.882718    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:31.882733    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:31.882836    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:31.882856    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:31.882964    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:31.883010    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:31.883091    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:31.883141    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/id_rsa Username:docker}
	I0722 03:57:31.883196    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/id_rsa Username:docker}
	W0722 03:57:31.910458    3911 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 03:57:31.910515    3911 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 03:57:31.960457    3911 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 03:57:31.960475    3911 start.go:495] detecting cgroup driver to use...
	I0722 03:57:31.960567    3911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 03:57:31.976097    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0722 03:57:31.984637    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0722 03:57:31.992923    3911 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0722 03:57:31.992964    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0722 03:57:32.001492    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 03:57:32.009758    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0722 03:57:32.018152    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 03:57:32.026574    3911 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 03:57:32.034947    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0722 03:57:32.043182    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0722 03:57:32.051485    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0722 03:57:32.059820    3911 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 03:57:32.067251    3911 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 03:57:32.074803    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:57:32.169893    3911 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0722 03:57:32.188393    3911 start.go:495] detecting cgroup driver to use...
	I0722 03:57:32.188465    3911 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0722 03:57:32.206602    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 03:57:32.223241    3911 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 03:57:32.241086    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 03:57:32.252378    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 03:57:32.263494    3911 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0722 03:57:32.285713    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 03:57:32.296269    3911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 03:57:32.311089    3911 ssh_runner.go:195] Run: which cri-dockerd
	I0722 03:57:32.314143    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0722 03:57:32.321424    3911 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0722 03:57:32.335207    3911 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0722 03:57:32.429597    3911 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0722 03:57:32.542464    3911 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0722 03:57:32.542490    3911 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0722 03:57:32.557136    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:57:32.660326    3911 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 03:58:33.699453    3911 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.040752064s)
	I0722 03:58:33.699525    3911 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0722 03:58:33.734950    3911 out.go:177] 
	W0722 03:58:33.756536    3911 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 22 10:57:29 ha-090000-m04 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 10:57:29 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:29.446112727Z" level=info msg="Starting up"
	Jul 22 10:57:29 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:29.446594219Z" level=info msg="containerd not running, starting managed containerd"
	Jul 22 10:57:29 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:29.447194660Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=516
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.462050990Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.476816092Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.476858837Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.476899215Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.476909407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.477031508Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.477068105Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.477176376Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.477210709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.477222939Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.477230881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.477351816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.477553357Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.479128485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.479167134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.479271300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.479304705Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.479417021Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.479458809Z" level=info msg="metadata content store policy set" policy=shared
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481448117Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481494900Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481508142Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481517623Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481527464Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481569984Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481744950Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481852966Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481872403Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481907193Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481919076Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481928860Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481936657Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481955520Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481967273Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481975440Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481983423Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481991104Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482004822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482014286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482022158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482030329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482040470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482053851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482064290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482072410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482080983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482093264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482100888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482108346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482115856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482130159Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482146190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482154580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482161596Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482209554Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482243396Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482253257Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482261382Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482267623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482276094Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482285841Z" level=info msg="NRI interface is disabled by configuration."
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482429840Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482484213Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482510048Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482541660Z" level=info msg="containerd successfully booted in 0.021090s"
	Jul 22 10:57:30 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:30.467405362Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 22 10:57:30 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:30.479322696Z" level=info msg="Loading containers: start."
	Jul 22 10:57:30 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:30.599220957Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 22 10:57:30 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:30.665815288Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 22 10:57:31 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:31.771955379Z" level=warning msg="error locating sandbox id 023e4273edcd40723038879300e7321a9aec3901cb772dbfe3c38850836b1315: sandbox 023e4273edcd40723038879300e7321a9aec3901cb772dbfe3c38850836b1315 not found"
	Jul 22 10:57:31 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:31.772061725Z" level=info msg="Loading containers: done."
	Jul 22 10:57:31 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:31.779357823Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 22 10:57:31 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:31.779511676Z" level=info msg="Daemon has completed initialization"
	Jul 22 10:57:31 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:31.801250223Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 22 10:57:31 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:31.801353911Z" level=info msg="API listen on [::]:2376"
	Jul 22 10:57:31 ha-090000-m04 systemd[1]: Started Docker Application Container Engine.
	Jul 22 10:57:32 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:32.860896719Z" level=info msg="Processing signal 'terminated'"
	Jul 22 10:57:32 ha-090000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Jul 22 10:57:32 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:32.862255865Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 22 10:57:32 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:32.862561859Z" level=info msg="Daemon shutdown complete"
	Jul 22 10:57:32 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:32.862690583Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 22 10:57:32 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:32.862732129Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 22 10:57:33 ha-090000-m04 systemd[1]: docker.service: Deactivated successfully.
	Jul 22 10:57:33 ha-090000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Jul 22 10:57:33 ha-090000-m04 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 10:57:33 ha-090000-m04 dockerd[1100]: time="2024-07-22T10:57:33.897261523Z" level=info msg="Starting up"
	Jul 22 10:58:33 ha-090000-m04 dockerd[1100]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 22 10:58:33 ha-090000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 22 10:58:33 ha-090000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 22 10:58:33 ha-090000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0722 03:58:33.756659    3911 out.go:239] * 
	W0722 03:58:33.757887    3911 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 03:58:33.836490    3911 out.go:177] 
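The failed start above exits with RUNTIME_ENABLE because `sudo systemctl restart docker` returned status 1 on ha-090000-m04, after which minikube dumps `journalctl --no-pager -u docker`. As a rough way to re-collect the same diagnostics by hand, the sketch below shells out to systemctl and journalctl; it is an illustrative stand-alone snippet that assumes sudo and systemd inside the guest VM, not minikube's own ssh_runner code.

	// Minimal stand-alone sketch (assumption: run with sudo on a systemd-based
	// guest such as the minikube VM; this is not minikube's ssh_runner code).
	// It re-collects the diagnostics shown above after a failed docker restart.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmds := [][]string{
			{"systemctl", "status", "docker.service", "--no-pager"},
			{"journalctl", "--no-pager", "-u", "docker"},
		}
		for _, args := range cmds {
			out, err := exec.Command("sudo", args...).CombinedOutput()
			fmt.Printf("$ sudo %s\n%s\n", args, out)
			if err != nil {
				fmt.Println("command failed:", err)
			}
		}
	}

In this run the journal shows dockerd failing to dial /run/containerd/containerd.sock until the start timeout, which is the underlying error surfaced by the RUNTIME_ENABLE exit.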
	
	
	==> Docker <==
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.322254347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.322411712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.322506665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.324060847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 10:57:24 ha-090000 cri-dockerd[1362]: time="2024-07-22T10:57:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7fa5abbbfcc70888391d1fe46cf13ea2dd225349b0b899c6f8e60fd6b585bd3a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.381899070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.382048336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.382062675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.382185963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.433062434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.433701381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.433819122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.434274603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.614154636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.614280634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.614291998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.614661459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 10:57:54 ha-090000 dockerd[1108]: time="2024-07-22T10:57:54.856709477Z" level=info msg="ignoring event" container=ea06caf73a7d0c82f3188bf4c821f988c6d96a724553f9eb2405d48823ccb42d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 10:57:54 ha-090000 dockerd[1114]: time="2024-07-22T10:57:54.856997797Z" level=info msg="shim disconnected" id=ea06caf73a7d0c82f3188bf4c821f988c6d96a724553f9eb2405d48823ccb42d namespace=moby
	Jul 22 10:57:54 ha-090000 dockerd[1114]: time="2024-07-22T10:57:54.857029303Z" level=warning msg="cleaning up after shim disconnected" id=ea06caf73a7d0c82f3188bf4c821f988c6d96a724553f9eb2405d48823ccb42d namespace=moby
	Jul 22 10:57:54 ha-090000 dockerd[1114]: time="2024-07-22T10:57:54.857035486Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 10:58:10 ha-090000 dockerd[1114]: time="2024-07-22T10:58:10.369414842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 10:58:10 ha-090000 dockerd[1114]: time="2024-07-22T10:58:10.369475705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 10:58:10 ha-090000 dockerd[1114]: time="2024-07-22T10:58:10.369515025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 10:58:10 ha-090000 dockerd[1114]: time="2024-07-22T10:58:10.369868470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	3c38b6dfe3f08       6e38f40d628db       25 seconds ago       Running             storage-provisioner       3                   63b37a4b34936       storage-provisioner
	dbee947401e9e       8c811b4aec35f       About a minute ago   Running             busybox                   2                   7fa5abbbfcc70       busybox-fc5497c4f-2tcf2
	421317be1b454       6f1d07c71fa0f       About a minute ago   Running             kindnet-cni               2                   f70ae8f7b153f       kindnet-mqxjd
	22d788aa28349       cbb01a7bd410d       About a minute ago   Running             coredns                   2                   4e646db94a0f3       coredns-7db6d8ff4d-lf5mv
	ea06caf73a7d0       6e38f40d628db       About a minute ago   Exited              storage-provisioner       2                   63b37a4b34936       storage-provisioner
	6a1a698341695       cbb01a7bd410d       About a minute ago   Running             coredns                   2                   af9872ab6752e       coredns-7db6d8ff4d-mjc97
	9ea9aba3e1e98       55bb025d2cfa5       About a minute ago   Running             kube-proxy                2                   372499e41b533       kube-proxy-xzpdq
	38dfb2ab5697d       76932a3b37d7e       About a minute ago   Running             kube-controller-manager   4                   696d1720743f7       kube-controller-manager-ha-090000
	945dd2cdb8d5e       1f6d574d502f3       About a minute ago   Running             kube-apiserver            4                   f90e22d71e804       kube-apiserver-ha-090000
	d4bee2dc89b59       38af8ddebf499       2 minutes ago        Running             kube-vip                  1                   ed980c36ff3a0       kube-vip-ha-090000
	cbe7a7a54b053       3edc18e7b7672       2 minutes ago        Running             kube-scheduler            2                   060bad469022e       kube-scheduler-ha-090000
	288b4db4b4674       3861cfcd7c04c       2 minutes ago        Running             etcd                      2                   13882f0cb79d3       etcd-ha-090000
	0469220f71ca8       76932a3b37d7e       2 minutes ago        Exited              kube-controller-manager   3                   696d1720743f7       kube-controller-manager-ha-090000
	4b11d2fc0144c       1f6d574d502f3       2 minutes ago        Exited              kube-apiserver            3                   f90e22d71e804       kube-apiserver-ha-090000
	55fbc8e5d31b7       cbb01a7bd410d       5 minutes ago        Exited              coredns                   1                   ee6d0b35bdb3e       coredns-7db6d8ff4d-lf5mv
	1138d893c2d9d       cbb01a7bd410d       6 minutes ago        Exited              coredns                   1                   b7d38b6fa5afe       coredns-7db6d8ff4d-mjc97
	0cf43afb12ba9       6f1d07c71fa0f       6 minutes ago        Exited              kindnet-cni               1                   c2c5f6c134990       kindnet-mqxjd
	391ccb3367a92       55bb025d2cfa5       6 minutes ago        Exited              kube-proxy                1                   a7ddfdc244624       kube-proxy-xzpdq
	c354917eb9a7f       8c811b4aec35f       6 minutes ago        Exited              busybox                   1                   4b6299052dfcb       busybox-fc5497c4f-2tcf2
	b156ed53a712c       38af8ddebf499       7 minutes ago        Exited              kube-vip                  0                   403e3036bbfc3       kube-vip-ha-090000
	13f15d0cc8b35       3861cfcd7c04c       7 minutes ago        Exited              etcd                      1                   d23a99af3047f       etcd-ha-090000
	2c775554c943e       3edc18e7b7672       7 minutes ago        Exited              kube-scheduler            1                   d552ca73d0455       kube-scheduler-ha-090000
	
	
	==> coredns [1138d893c2d9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:33736 - 25278 "HINFO IN 2232067124097066746.5321966554492967552. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.017294568s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [22d788aa2834] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60421 - 52053 "HINFO IN 2117351152882643557.306907224904004981. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.0117126s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1646561162]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Jul-2024 10:57:24.636) (total time: 30001ms):
	Trace[1646561162]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:57:54.637)
	Trace[1646561162]: [30.001499419s] [30.001499419s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[204250251]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Jul-2024 10:57:24.637) (total time: 30003ms):
	Trace[204250251]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:57:54.638)
	Trace[204250251]: [30.003162507s] [30.003162507s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1668393]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Jul-2024 10:57:24.637) (total time: 30003ms):
	Trace[1668393]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:57:54.638)
	Trace[1668393]: [30.003053761s] [30.003053761s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [55fbc8e5d31b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50575 - 59902 "HINFO IN 3988656002558365066.2402106395491727482. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01065485s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6a1a69834169] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55362 - 47981 "HINFO IN 3732672677383048017.8374754956493277366. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011991665s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[975084044]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Jul-2024 10:57:24.636) (total time: 30002ms):
	Trace[975084044]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:57:54.637)
	Trace[975084044]: [30.002362908s] [30.002362908s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[901565610]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Jul-2024 10:57:24.635) (total time: 30004ms):
	Trace[901565610]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (10:57:54.638)
	Trace[901565610]: [30.004123041s] [30.004123041s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[882114494]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Jul-2024 10:57:24.636) (total time: 30003ms):
	Trace[882114494]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:57:54.637)
	Trace[882114494]: [30.003053378s] [30.003053378s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
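Both restarted CoreDNS pods above time out listing objects from https://10.96.0.1:443, i.e. the in-cluster kubernetes Service VIP was unreachable for roughly 30s while the control plane came back up. A minimal reachability probe of that VIP could look like the sketch below; it assumes it is run from inside the cluster network, and the address 10.96.0.1:443 is simply taken from the errors in the log.

	// Minimal sketch (assumption: executed from a pod or node inside the cluster
	// network; 10.96.0.1:443 is the Service VIP quoted in the CoreDNS errors above).
	package main

	import (
		"fmt"
		"net"
		"time"
	)

	func main() {
		conn, err := net.DialTimeout("tcp", "10.96.0.1:443", 5*time.Second)
		if err != nil {
			// While the apiserver is still coming up, this reproduces the
			// "dial tcp 10.96.0.1:443: i/o timeout" symptom in the log.
			fmt.Println("VIP unreachable:", err)
			return
		}
		conn.Close()
		fmt.Println("kubernetes Service VIP reachable")
	}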
	
	
	==> describe nodes <==
	Name:               ha-090000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-090000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=ha-090000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T03_43_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 10:43:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-090000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 10:58:35 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 10:56:52 +0000   Mon, 22 Jul 2024 10:43:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 10:56:52 +0000   Mon, 22 Jul 2024 10:43:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 10:56:52 +0000   Mon, 22 Jul 2024 10:43:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 10:56:52 +0000   Mon, 22 Jul 2024 10:43:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-090000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 e090de7f3c6a411da7987789cad7e565
	  System UUID:                865e4f09-0000-0000-8c93-9ca2b7f6f541
	  Boot ID:                    303932c9-04d5-4f3a-ad0c-ae1b2083c258
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2tcf2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 coredns-7db6d8ff4d-lf5mv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-7db6d8ff4d-mjc97             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-ha-090000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-mqxjd                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-090000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-090000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-xzpdq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-090000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-090000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m32s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m4s                   kube-proxy       
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  Starting                 70s                    kube-proxy       
	  Normal  NodeHasNoDiskPressure    15m                    kubelet          Node ha-090000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m                    kubelet          Node ha-090000 status is now: NodeHasSufficientPID
	  Normal  Starting                 15m                    kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  15m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  15m                    kubelet          Node ha-090000 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           15m                    node-controller  Node ha-090000 event: Registered Node ha-090000 in Controller
	  Normal  NodeReady                14m                    kubelet          Node ha-090000 status is now: NodeReady
	  Normal  RegisteredNode           14m                    node-controller  Node ha-090000 event: Registered Node ha-090000 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-090000 event: Registered Node ha-090000 in Controller
	  Normal  RegisteredNode           8m50s                  node-controller  Node ha-090000 event: Registered Node ha-090000 in Controller
	  Normal  Starting                 7m42s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m42s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasNoDiskPressure    7m41s (x8 over 7m42s)  kubelet          Node ha-090000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  7m41s (x8 over 7m42s)  kubelet          Node ha-090000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     7m41s (x7 over 7m42s)  kubelet          Node ha-090000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           6m37s                  node-controller  Node ha-090000 event: Registered Node ha-090000 in Controller
	  Normal  RegisteredNode           6m36s                  node-controller  Node ha-090000 event: Registered Node ha-090000 in Controller
	  Normal  RegisteredNode           6m9s                   node-controller  Node ha-090000 event: Registered Node ha-090000 in Controller
	  Normal  NodeHasSufficientPID     2m39s (x7 over 2m39s)  kubelet          Node ha-090000 status is now: NodeHasSufficientPID
	  Normal  Starting                 2m39s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m39s (x8 over 2m39s)  kubelet          Node ha-090000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    2m39s (x8 over 2m39s)  kubelet          Node ha-090000 status is now: NodeHasNoDiskPressure
	  Normal  NodeAllocatableEnforced  2m39s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           95s                    node-controller  Node ha-090000 event: Registered Node ha-090000 in Controller
	  Normal  RegisteredNode           93s                    node-controller  Node ha-090000 event: Registered Node ha-090000 in Controller
	
	
	Name:               ha-090000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-090000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=ha-090000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T03_44_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 10:44:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-090000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 10:58:31 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 10:56:49 +0000   Mon, 22 Jul 2024 10:44:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 10:56:49 +0000   Mon, 22 Jul 2024 10:44:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 10:56:49 +0000   Mon, 22 Jul 2024 10:44:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 10:56:49 +0000   Mon, 22 Jul 2024 10:44:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-090000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 653da24da8184d5683495d77f2663655
	  System UUID:                a2384298-0000-0000-98be-9d336c163b01
	  Boot ID:                    3ccf92f6-5554-420f-a8c6-c419f6124a20
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8n2c6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 etcd-ha-090000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-xt575                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-090000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-090000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-8wl7h                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-090000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-090000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 99s                    kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   Starting                 9m4s                   kube-proxy       
	  Normal   Starting                 6m44s                  kube-proxy       
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-090000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-090000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-090000-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           14m                    node-controller  Node ha-090000-m02 event: Registered Node ha-090000-m02 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-090000-m02 event: Registered Node ha-090000-m02 in Controller
	  Normal   RegisteredNode           12m                    node-controller  Node ha-090000-m02 event: Registered Node ha-090000-m02 in Controller
	  Warning  Rebooted                 9m9s                   kubelet          Node ha-090000-m02 has been rebooted, boot id: 296c4679-5b51-4230-a93d-85c12fa46a6b
	  Normal   NodeHasSufficientPID     9m9s                   kubelet          Node ha-090000-m02 status is now: NodeHasSufficientPID
	  Normal   Starting                 9m9s                   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  9m9s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  9m9s                   kubelet          Node ha-090000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    9m9s                   kubelet          Node ha-090000-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           8m50s                  node-controller  Node ha-090000-m02 event: Registered Node ha-090000-m02 in Controller
	  Normal   NodeHasSufficientPID     6m59s (x7 over 6m59s)  kubelet          Node ha-090000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  6m59s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    6m59s (x8 over 6m59s)  kubelet          Node ha-090000-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 6m59s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  6m59s (x8 over 6m59s)  kubelet          Node ha-090000-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           6m37s                  node-controller  Node ha-090000-m02 event: Registered Node ha-090000-m02 in Controller
	  Normal   RegisteredNode           6m36s                  node-controller  Node ha-090000-m02 event: Registered Node ha-090000-m02 in Controller
	  Normal   RegisteredNode           6m9s                   node-controller  Node ha-090000-m02 event: Registered Node ha-090000-m02 in Controller
	  Normal   Starting                 116s                   kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  116s (x8 over 116s)    kubelet          Node ha-090000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    116s (x8 over 116s)    kubelet          Node ha-090000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     116s (x7 over 116s)    kubelet          Node ha-090000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  116s                   kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           95s                    node-controller  Node ha-090000-m02 event: Registered Node ha-090000-m02 in Controller
	  Normal   RegisteredNode           93s                    node-controller  Node ha-090000-m02 event: Registered Node ha-090000-m02 in Controller
	
	
	Name:               ha-090000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-090000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=ha-090000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T03_48_19_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 10:48:19 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-090000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 10:54:44 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 22 Jul 2024 10:54:34 +0000   Mon, 22 Jul 2024 10:57:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 22 Jul 2024 10:54:34 +0000   Mon, 22 Jul 2024 10:57:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 22 Jul 2024 10:54:34 +0000   Mon, 22 Jul 2024 10:57:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 22 Jul 2024 10:54:34 +0000   Mon, 22 Jul 2024 10:57:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-090000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 a0ea4e5276974bf88752eb7d59d19d28
	  System UUID:                f13543bd-0000-0000-a5c6-6cfffb7afaca
	  Boot ID:                    efc37e64-414b-4a11-8b92-5afe32b46caa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5xsl6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m54s
	  kube-system                 kindnet-kqb2r              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-8f92w           0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 10m                  kube-proxy       
	  Normal   Starting                 4m                   kube-proxy       
	  Normal   NodeHasSufficientPID     10m (x2 over 10m)    kubelet          Node ha-090000-m04 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    10m (x2 over 10m)    kubelet          Node ha-090000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  10m (x2 over 10m)    kubelet          Node ha-090000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  10m                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           10m                  node-controller  Node ha-090000-m04 event: Registered Node ha-090000-m04 in Controller
	  Normal   RegisteredNode           10m                  node-controller  Node ha-090000-m04 event: Registered Node ha-090000-m04 in Controller
	  Normal   RegisteredNode           10m                  node-controller  Node ha-090000-m04 event: Registered Node ha-090000-m04 in Controller
	  Normal   NodeReady                9m53s                kubelet          Node ha-090000-m04 status is now: NodeReady
	  Normal   RegisteredNode           8m50s                node-controller  Node ha-090000-m04 event: Registered Node ha-090000-m04 in Controller
	  Normal   RegisteredNode           6m37s                node-controller  Node ha-090000-m04 event: Registered Node ha-090000-m04 in Controller
	  Normal   RegisteredNode           6m36s                node-controller  Node ha-090000-m04 event: Registered Node ha-090000-m04 in Controller
	  Normal   RegisteredNode           6m9s                 node-controller  Node ha-090000-m04 event: Registered Node ha-090000-m04 in Controller
	  Normal   NodeNotReady             5m57s                node-controller  Node ha-090000-m04 status is now: NodeNotReady
	  Normal   Starting                 4m1s                 kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  4m1s                 kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  4m1s (x2 over 4m1s)  kubelet          Node ha-090000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    4m1s (x2 over 4m1s)  kubelet          Node ha-090000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     4m1s (x2 over 4m1s)  kubelet          Node ha-090000-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 4m1s                 kubelet          Node ha-090000-m04 has been rebooted, boot id: efc37e64-414b-4a11-8b92-5afe32b46caa
	  Normal   NodeReady                4m1s                 kubelet          Node ha-090000-m04 status is now: NodeReady
	  Normal   RegisteredNode           95s                  node-controller  Node ha-090000-m04 event: Registered Node ha-090000-m04 in Controller
	  Normal   RegisteredNode           93s                  node-controller  Node ha-090000-m04 event: Registered Node ha-090000-m04 in Controller
	  Normal   NodeNotReady             55s                  node-controller  Node ha-090000-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035617] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007975] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.373871] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007077] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.539212] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.228184] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +24.988715] systemd-fstab-generator[494]: Ignoring "noauto" option for root device
	[  +0.107475] systemd-fstab-generator[506]: Ignoring "noauto" option for root device
	[  +1.936648] systemd-fstab-generator[1035]: Ignoring "noauto" option for root device
	[  +0.257907] systemd-fstab-generator[1074]: Ignoring "noauto" option for root device
	[  +0.103829] systemd-fstab-generator[1086]: Ignoring "noauto" option for root device
	[  +0.114013] systemd-fstab-generator[1100]: Ignoring "noauto" option for root device
	[  +2.455872] systemd-fstab-generator[1315]: Ignoring "noauto" option for root device
	[  +0.050301] kauditd_printk_skb: 195 callbacks suppressed
	[  +0.044507] systemd-fstab-generator[1327]: Ignoring "noauto" option for root device
	[  +0.113079] systemd-fstab-generator[1339]: Ignoring "noauto" option for root device
	[  +0.126479] systemd-fstab-generator[1354]: Ignoring "noauto" option for root device
	[  +0.424114] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	[Jul22 10:56] kauditd_printk_skb: 110 callbacks suppressed
	[ +21.703375] kauditd_printk_skb: 40 callbacks suppressed
	[Jul22 10:57] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [13f15d0cc8b3] <==
	{"level":"warn","ts":"2024-07-22T10:55:06.173331Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-22T10:54:59.683443Z","time spent":"6.489887603s","remote":"127.0.0.1:56816","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/07/22 10:55:06 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-22T10:55:06.17339Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.16404297s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-07-22T10:55:06.173403Z","caller":"traceutil/trace.go:171","msg":"trace[1906020764] range","detail":"{range_begin:/registry/pods/; range_end:/registry/pods0; }","duration":"5.164078582s","start":"2024-07-22T10:55:01.00932Z","end":"2024-07-22T10:55:06.173399Z","steps":["trace[1906020764] 'agreement among raft nodes before linearized reading'  (duration: 5.164064047s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T10:55:06.173414Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-22T10:55:01.009308Z","time spent":"5.164102318s","remote":"127.0.0.1:56742","response type":"/etcdserverpb.KV/Range","request count":0,"request size":36,"response count":0,"response size":0,"request content":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" count_only:true "}
	2024/07/22 10:55:06 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-22T10:55:06.173464Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-22T10:55:01.643069Z","time spent":"4.530393805s","remote":"127.0.0.1:56816","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/07/22 10:55:06 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-22T10:55:06.173512Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.532556119s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-07-22T10:55:06.173523Z","caller":"traceutil/trace.go:171","msg":"trace[1235589650] range","detail":"{range_begin:/registry/storageclasses/; range_end:/registry/storageclasses0; }","duration":"1.532569274s","start":"2024-07-22T10:55:04.640951Z","end":"2024-07-22T10:55:06.17352Z","steps":["trace[1235589650] 'agreement among raft nodes before linearized reading'  (duration: 1.532556007s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T10:55:06.173549Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-22T10:55:04.640945Z","time spent":"1.53259879s","remote":"127.0.0.1:56928","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":0,"request content":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true "}
	2024/07/22 10:55:06 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-22T10:55:06.197209Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T10:55:06.197255Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-22T10:55:06.198761Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-22T10:55:06.198873Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"5ef48be478f2a308"}
	{"level":"info","ts":"2024-07-22T10:55:06.198885Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5ef48be478f2a308"}
	{"level":"info","ts":"2024-07-22T10:55:06.198901Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5ef48be478f2a308"}
	{"level":"info","ts":"2024-07-22T10:55:06.198951Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"5ef48be478f2a308"}
	{"level":"info","ts":"2024-07-22T10:55:06.198977Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"5ef48be478f2a308"}
	{"level":"info","ts":"2024-07-22T10:55:06.199001Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"5ef48be478f2a308"}
	{"level":"info","ts":"2024-07-22T10:55:06.19901Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"5ef48be478f2a308"}
	{"level":"info","ts":"2024-07-22T10:55:06.201088Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-07-22T10:55:06.201174Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-07-22T10:55:06.201202Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-090000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> etcd [288b4db4b467] <==
	{"level":"info","ts":"2024-07-22T10:56:46.463203Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"5ef48be478f2a308","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-22T10:56:46.463492Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"5ef48be478f2a308"}
	{"level":"info","ts":"2024-07-22T10:56:46.939701Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 is starting a new election at term 3"}
	{"level":"info","ts":"2024-07-22T10:56:46.939749Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-07-22T10:56:46.93976Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from b8c6c7563d17d844 at term 3"}
	{"level":"info","ts":"2024-07-22T10:56:46.939781Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 3358] sent MsgPreVote request to 5ef48be478f2a308 at term 3"}
	{"level":"info","ts":"2024-07-22T10:56:46.941396Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgPreVoteResp from 5ef48be478f2a308 at term 3"}
	{"level":"info","ts":"2024-07-22T10:56:46.941431Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 has received 2 MsgPreVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-07-22T10:56:46.941441Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became candidate at term 4"}
	{"level":"info","ts":"2024-07-22T10:56:46.941445Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgVoteResp from b8c6c7563d17d844 at term 4"}
	{"level":"info","ts":"2024-07-22T10:56:46.941459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 [logterm: 3, index: 3358] sent MsgVote request to 5ef48be478f2a308 at term 4"}
	{"level":"info","ts":"2024-07-22T10:56:46.945846Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 received MsgVoteResp from 5ef48be478f2a308 at term 4"}
	{"level":"info","ts":"2024-07-22T10:56:46.945888Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 has received 2 MsgVoteResp votes and 0 vote rejections"}
	{"level":"info","ts":"2024-07-22T10:56:46.9459Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 became leader at term 4"}
	{"level":"info","ts":"2024-07-22T10:56:46.945906Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: b8c6c7563d17d844 elected leader b8c6c7563d17d844 at term 4"}
	{"level":"info","ts":"2024-07-22T10:56:46.957132Z","caller":"etcdserver/server.go:2068","msg":"published local member to cluster through raft","local-member-id":"b8c6c7563d17d844","local-member-attributes":"{Name:ha-090000 ClientURLs:[https://192.169.0.5:2379]}","request-path":"/0/members/b8c6c7563d17d844/attributes","cluster-id":"b73189effde9bc63","publish-timeout":"7s"}
	{"level":"info","ts":"2024-07-22T10:56:46.957175Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T10:56:46.957493Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-07-22T10:56:46.957527Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-07-22T10:56:46.957544Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-07-22T10:56:46.958972Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.169.0.5:2379"}
	{"level":"info","ts":"2024-07-22T10:56:46.960079Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"warn","ts":"2024-07-22T10:56:46.963514Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:38510","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-07-22T10:56:46.966109Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:38488","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-07-22T10:56:46.968533Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:38504","server-name":"","error":"EOF"}
	
	
	==> kernel <==
	 10:58:36 up 3 min,  0 users,  load average: 0.26, 0.11, 0.04
	Linux ha-090000 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [0cf43afb12ba] <==
	I0722 10:54:34.307043       1 main.go:299] handling current node
	I0722 10:54:34.307109       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0722 10:54:34.307241       1 main.go:322] Node ha-090000-m02 has CIDR [10.244.1.0/24] 
	I0722 10:54:34.307497       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0722 10:54:34.307630       1 main.go:322] Node ha-090000-m03 has CIDR [10.244.2.0/24] 
	I0722 10:54:44.306852       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0722 10:54:44.306921       1 main.go:299] handling current node
	I0722 10:54:44.306943       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0722 10:54:44.306951       1 main.go:322] Node ha-090000-m02 has CIDR [10.244.1.0/24] 
	I0722 10:54:44.307088       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0722 10:54:44.307126       1 main.go:322] Node ha-090000-m03 has CIDR [10.244.2.0/24] 
	I0722 10:54:44.307182       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0722 10:54:44.307217       1 main.go:322] Node ha-090000-m04 has CIDR [10.244.3.0/24] 
	I0722 10:54:54.307325       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0722 10:54:54.307472       1 main.go:299] handling current node
	I0722 10:54:54.307635       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0722 10:54:54.307859       1 main.go:322] Node ha-090000-m02 has CIDR [10.244.1.0/24] 
	I0722 10:54:54.308266       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0722 10:54:54.308282       1 main.go:322] Node ha-090000-m04 has CIDR [10.244.3.0/24] 
	I0722 10:55:04.306079       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0722 10:55:04.306198       1 main.go:322] Node ha-090000-m04 has CIDR [10.244.3.0/24] 
	I0722 10:55:04.308236       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0722 10:55:04.308265       1 main.go:299] handling current node
	I0722 10:55:04.308274       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0722 10:55:04.308279       1 main.go:322] Node ha-090000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [421317be1b45] <==
	I0722 10:57:55.604879       1 main.go:322] Node ha-090000-m04 has CIDR [10.244.3.0/24] 
	I0722 10:58:05.611941       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0722 10:58:05.611981       1 main.go:322] Node ha-090000-m02 has CIDR [10.244.1.0/24] 
	I0722 10:58:05.612249       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0722 10:58:05.612280       1 main.go:322] Node ha-090000-m04 has CIDR [10.244.3.0/24] 
	I0722 10:58:05.612399       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0722 10:58:05.612470       1 main.go:299] handling current node
	I0722 10:58:15.603323       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0722 10:58:15.603501       1 main.go:299] handling current node
	I0722 10:58:15.603555       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0722 10:58:15.603584       1 main.go:322] Node ha-090000-m02 has CIDR [10.244.1.0/24] 
	I0722 10:58:15.603727       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0722 10:58:15.603885       1 main.go:322] Node ha-090000-m04 has CIDR [10.244.3.0/24] 
	I0722 10:58:25.604372       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0722 10:58:25.604500       1 main.go:322] Node ha-090000-m04 has CIDR [10.244.3.0/24] 
	I0722 10:58:25.604954       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0722 10:58:25.605071       1 main.go:299] handling current node
	I0722 10:58:25.605090       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0722 10:58:25.605404       1 main.go:322] Node ha-090000-m02 has CIDR [10.244.1.0/24] 
	I0722 10:58:35.612589       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0722 10:58:35.612610       1 main.go:299] handling current node
	I0722 10:58:35.612620       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0722 10:58:35.612633       1 main.go:322] Node ha-090000-m02 has CIDR [10.244.1.0/24] 
	I0722 10:58:35.612692       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0722 10:58:35.612697       1 main.go:322] Node ha-090000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [4b11d2fc0144] <==
	I0722 10:56:02.936396       1 options.go:221] external host was not specified, using 192.169.0.5
	I0722 10:56:02.938177       1 server.go:148] Version: v1.30.3
	I0722 10:56:02.938401       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 10:56:04.233098       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0722 10:56:04.237477       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0722 10:56:04.240213       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0722 10:56:04.242473       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0722 10:56:04.246562       1 instance.go:299] Using reconciler: lease
	W0722 10:56:24.231990       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0722 10:56:24.232857       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0722 10:56:24.247519       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [945dd2cdb8d5] <==
	I0722 10:56:48.075828       1 naming_controller.go:291] Starting NamingConditionController
	I0722 10:56:48.075837       1 establishing_controller.go:76] Starting EstablishingController
	I0722 10:56:48.075846       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0722 10:56:48.075868       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0722 10:56:48.075874       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0722 10:56:48.207926       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0722 10:56:48.208391       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0722 10:56:48.211944       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0722 10:56:48.211985       1 shared_informer.go:320] Caches are synced for configmaps
	I0722 10:56:48.212771       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0722 10:56:48.213106       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0722 10:56:48.216923       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0722 10:56:48.217042       1 aggregator.go:165] initial CRD sync complete...
	I0722 10:56:48.217145       1 autoregister_controller.go:141] Starting autoregister controller
	I0722 10:56:48.217189       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0722 10:56:48.217275       1 cache.go:39] Caches are synced for autoregister controller
	I0722 10:56:48.221104       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0722 10:56:48.239521       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0722 10:56:48.242525       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0722 10:56:48.242684       1 policy_source.go:224] refreshing policies
	E0722 10:56:48.283526       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0722 10:56:48.308533       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0722 10:56:49.071315       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0722 10:57:23.501088       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0722 10:57:23.512462       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [0469220f71ca] <==
	I0722 10:56:03.617244       1 serving.go:380] Generated self-signed cert in-memory
	I0722 10:56:04.853391       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0722 10:56:04.853427       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 10:56:04.854617       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0722 10:56:04.854850       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0722 10:56:04.854944       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0722 10:56:04.855121       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0722 10:56:25.255891       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused"
	
	
	==> kube-controller-manager [38dfb2ab5697] <==
	E0722 10:57:40.600876       1 gc_controller.go:153] "Failed to get node" err="node \"ha-090000-m03\" not found" logger="pod-garbage-collector-controller" node="ha-090000-m03"
	I0722 10:57:40.609438       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-090000-m03"
	I0722 10:57:40.623613       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-scheduler-ha-090000-m03"
	I0722 10:57:40.623649       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-090000-m03"
	I0722 10:57:40.636832       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-controller-manager-ha-090000-m03"
	I0722 10:57:40.637034       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-090000-m03"
	I0722 10:57:40.651093       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-090000-m03"
	I0722 10:57:40.651349       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-090000-m03"
	I0722 10:57:40.666222       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-090000-m03"
	I0722 10:57:40.666259       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-lf6b4"
	I0722 10:57:40.681166       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-lf6b4"
	I0722 10:57:40.681200       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-s5kg7"
	I0722 10:57:40.694662       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-s5kg7"
	I0722 10:57:40.694710       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-090000-m03"
	I0722 10:57:40.709068       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-090000-m03"
	I0722 10:57:40.764272       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.008583ms"
	I0722 10:57:40.764797       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.544µs"
	I0722 10:58:03.618800       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="9.381236ms"
	I0722 10:58:03.621760       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-rfbkc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-rfbkc\": the object has been modified; please apply your changes to the latest version and try again"
	I0722 10:58:03.621937       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"cbf53cbe-6c6b-4eb8-83fb-57cb4eb26b48", APIVersion:"v1", ResourceVersion:"259", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-rfbkc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-rfbkc": the object has been modified; please apply your changes to the latest version and try again
	I0722 10:58:03.622633       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="3.074861ms"
	I0722 10:58:03.644685       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="10.170528ms"
	I0722 10:58:03.645698       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="34.7µs"
	I0722 10:58:03.645633       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"cbf53cbe-6c6b-4eb8-83fb-57cb4eb26b48", APIVersion:"v1", ResourceVersion:"259", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-rfbkc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-rfbkc": the object has been modified; please apply your changes to the latest version and try again
	I0722 10:58:03.645487       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-rfbkc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-rfbkc\": the object has been modified; please apply your changes to the latest version and try again"
	
	
	==> kube-proxy [391ccb3367a9] <==
	I0722 10:52:31.180149       1 server_linux.go:69] "Using iptables proxy"
	I0722 10:52:31.201174       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	I0722 10:52:31.256621       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 10:52:31.256706       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 10:52:31.256721       1 server_linux.go:165] "Using iptables Proxier"
	I0722 10:52:31.259083       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 10:52:31.259774       1 server.go:872] "Version info" version="v1.30.3"
	I0722 10:52:31.259804       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 10:52:31.261784       1 config.go:192] "Starting service config controller"
	I0722 10:52:31.262305       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 10:52:31.261811       1 config.go:101] "Starting endpoint slice config controller"
	I0722 10:52:31.262481       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 10:52:31.264064       1 config.go:319] "Starting node config controller"
	I0722 10:52:31.264089       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 10:52:31.362703       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 10:52:31.362744       1 shared_informer.go:320] Caches are synced for service config
	I0722 10:52:31.364747       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [9ea9aba3e1e9] <==
	I0722 10:57:24.900497       1 server_linux.go:69] "Using iptables proxy"
	I0722 10:57:24.919847       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	I0722 10:57:24.958255       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 10:57:24.958402       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 10:57:24.958517       1 server_linux.go:165] "Using iptables Proxier"
	I0722 10:57:24.961727       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 10:57:24.962180       1 server.go:872] "Version info" version="v1.30.3"
	I0722 10:57:24.962261       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 10:57:24.964654       1 config.go:192] "Starting service config controller"
	I0722 10:57:24.964872       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 10:57:24.964945       1 config.go:101] "Starting endpoint slice config controller"
	I0722 10:57:24.964997       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 10:57:24.966344       1 config.go:319] "Starting node config controller"
	I0722 10:57:24.967117       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 10:57:25.066129       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 10:57:25.066147       1 shared_informer.go:320] Caches are synced for service config
	I0722 10:57:25.067691       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2c775554c943] <==
	W0722 10:51:45.397777       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0722 10:51:45.397808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0722 10:51:45.397839       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0722 10:51:45.397871       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0722 10:51:45.397899       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0722 10:51:45.397947       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0722 10:51:45.397980       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0722 10:51:45.413802       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 10:51:45.422126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 10:51:45.422305       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0722 10:51:45.422479       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 10:51:45.422614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 10:51:45.422760       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0722 10:51:45.422889       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0722 10:51:45.423057       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 10:51:45.423192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 10:51:45.423231       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0722 10:51:45.423239       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0722 10:51:45.423332       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0722 10:52:01.354860       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0722 10:54:41.357190       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-5xsl6\": pod busybox-fc5497c4f-5xsl6 is already assigned to node \"ha-090000-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-5xsl6" node="ha-090000-m04"
	E0722 10:54:41.357320       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 6d9ce972-1b0d-49c5-944b-6beca3ab4c50(default/busybox-fc5497c4f-5xsl6) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-5xsl6"
	E0722 10:54:41.357354       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-5xsl6\": pod busybox-fc5497c4f-5xsl6 is already assigned to node \"ha-090000-m04\"" pod="default/busybox-fc5497c4f-5xsl6"
	I0722 10:54:41.357392       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-5xsl6" node="ha-090000-m04"
	E0722 10:55:06.233408       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [cbe7a7a54b05] <==
	W0722 10:56:48.193852       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0722 10:56:48.193901       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0722 10:56:48.194013       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0722 10:56:48.194085       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0722 10:56:48.194169       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0722 10:56:48.194206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0722 10:56:48.194280       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0722 10:56:48.194336       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0722 10:56:48.195930       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 10:56:48.195970       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0722 10:56:48.196165       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0722 10:56:48.196201       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0722 10:56:48.196454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 10:56:48.196487       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0722 10:56:48.197536       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0722 10:56:48.197572       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0722 10:56:48.197667       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 10:56:48.197700       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0722 10:56:48.197762       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 10:56:48.197795       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0722 10:56:48.197869       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 10:56:48.197900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0722 10:56:48.197990       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 10:56:48.198023       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0722 10:57:06.865178       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Jul 22 10:57:23 ha-090000 kubelet[1525]: E0722 10:57:23.000359    1525 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ha-090000\" not found"
	Jul 22 10:57:23 ha-090000 kubelet[1525]: E0722 10:57:23.101480    1525 kubelet_node_status.go:462] "Error getting the current node from lister" err="node \"ha-090000\" not found"
	Jul 22 10:57:23 ha-090000 kubelet[1525]: I0722 10:57:23.290261    1525 apiserver.go:52] "Watching apiserver"
	Jul 22 10:57:23 ha-090000 kubelet[1525]: I0722 10:57:23.293317    1525 topology_manager.go:215] "Topology Admit Handler" podUID="ac1f1032-14ce-4c0c-b95b-a86bd4ef7810" podNamespace="kube-system" podName="coredns-7db6d8ff4d-mjc97"
	Jul 22 10:57:23 ha-090000 kubelet[1525]: I0722 10:57:23.293498    1525 topology_manager.go:215] "Topology Admit Handler" podUID="439b0e4a-14b8-4556-9ae6-6a26590b6d5d" podNamespace="kube-system" podName="kindnet-mqxjd"
	Jul 22 10:57:23 ha-090000 kubelet[1525]: I0722 10:57:23.293602    1525 topology_manager.go:215] "Topology Admit Handler" podUID="d7b9fd02-6f5f-47a4-afe2-1dab0d9141b7" podNamespace="kube-system" podName="kube-proxy-xzpdq"
	Jul 22 10:57:23 ha-090000 kubelet[1525]: I0722 10:57:23.293691    1525 topology_manager.go:215] "Topology Admit Handler" podUID="c1214845-bf0e-4808-9e11-faf18dd3cb3f" podNamespace="kube-system" podName="storage-provisioner"
	Jul 22 10:57:23 ha-090000 kubelet[1525]: I0722 10:57:23.293861    1525 topology_manager.go:215] "Topology Admit Handler" podUID="cd051db1-dcbb-4fee-85d9-be13d1be38ec" podNamespace="kube-system" podName="coredns-7db6d8ff4d-lf5mv"
	Jul 22 10:57:23 ha-090000 kubelet[1525]: I0722 10:57:23.296499    1525 topology_manager.go:215] "Topology Admit Handler" podUID="598660fc-04fc-474f-b06e-eec7ad0200cc" podNamespace="default" podName="busybox-fc5497c4f-2tcf2"
	Jul 22 10:57:23 ha-090000 kubelet[1525]: I0722 10:57:23.338625    1525 desired_state_of_world_populator.go:157] "Finished populating initial desired state of world"
	Jul 22 10:57:23 ha-090000 kubelet[1525]: I0722 10:57:23.403134    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d7b9fd02-6f5f-47a4-afe2-1dab0d9141b7-lib-modules\") pod \"kube-proxy-xzpdq\" (UID: \"d7b9fd02-6f5f-47a4-afe2-1dab0d9141b7\") " pod="kube-system/kube-proxy-xzpdq"
	Jul 22 10:57:23 ha-090000 kubelet[1525]: I0722 10:57:23.403181    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/439b0e4a-14b8-4556-9ae6-6a26590b6d5d-cni-cfg\") pod \"kindnet-mqxjd\" (UID: \"439b0e4a-14b8-4556-9ae6-6a26590b6d5d\") " pod="kube-system/kindnet-mqxjd"
	Jul 22 10:57:23 ha-090000 kubelet[1525]: I0722 10:57:23.403202    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/439b0e4a-14b8-4556-9ae6-6a26590b6d5d-xtables-lock\") pod \"kindnet-mqxjd\" (UID: \"439b0e4a-14b8-4556-9ae6-6a26590b6d5d\") " pod="kube-system/kindnet-mqxjd"
	Jul 22 10:57:23 ha-090000 kubelet[1525]: I0722 10:57:23.403214    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/c1214845-bf0e-4808-9e11-faf18dd3cb3f-tmp\") pod \"storage-provisioner\" (UID: \"c1214845-bf0e-4808-9e11-faf18dd3cb3f\") " pod="kube-system/storage-provisioner"
	Jul 22 10:57:23 ha-090000 kubelet[1525]: I0722 10:57:23.403232    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/439b0e4a-14b8-4556-9ae6-6a26590b6d5d-lib-modules\") pod \"kindnet-mqxjd\" (UID: \"439b0e4a-14b8-4556-9ae6-6a26590b6d5d\") " pod="kube-system/kindnet-mqxjd"
	Jul 22 10:57:23 ha-090000 kubelet[1525]: I0722 10:57:23.403242    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7b9fd02-6f5f-47a4-afe2-1dab0d9141b7-xtables-lock\") pod \"kube-proxy-xzpdq\" (UID: \"d7b9fd02-6f5f-47a4-afe2-1dab0d9141b7\") " pod="kube-system/kube-proxy-xzpdq"
	Jul 22 10:57:55 ha-090000 kubelet[1525]: I0722 10:57:55.092868    1525 scope.go:117] "RemoveContainer" containerID="20b3e825f92688bc16eac5677dae4924c90dbb460ee6bd408c84b27166d3492d"
	Jul 22 10:57:55 ha-090000 kubelet[1525]: I0722 10:57:55.093131    1525 scope.go:117] "RemoveContainer" containerID="ea06caf73a7d0c82f3188bf4c821f988c6d96a724553f9eb2405d48823ccb42d"
	Jul 22 10:57:55 ha-090000 kubelet[1525]: E0722 10:57:55.093241    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c1214845-bf0e-4808-9e11-faf18dd3cb3f)\"" pod="kube-system/storage-provisioner" podUID="c1214845-bf0e-4808-9e11-faf18dd3cb3f"
	Jul 22 10:57:56 ha-090000 kubelet[1525]: E0722 10:57:56.362531    1525 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 10:57:56 ha-090000 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 10:57:56 ha-090000 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 10:57:56 ha-090000 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 10:57:56 ha-090000 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 10:58:10 ha-090000 kubelet[1525]: I0722 10:58:10.329825    1525 scope.go:117] "RemoveContainer" containerID="ea06caf73a7d0c82f3188bf4c821f988c6d96a724553f9eb2405d48823ccb42d"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-090000 -n ha-090000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-090000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/RestartCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/RestartCluster (203.77s)
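
The RestartCluster failure above leaves the ha-090000 profile in place, so the post-mortem checks recorded by helpers_test.go can be repeated by hand. The commands below are a hypothetical manual triage sequence (not part of the test output); they assume the ha-090000 profile and the out/minikube-darwin-amd64 binary from this run are still available.

# Hypothetical manual triage for the RestartCluster failure (assumes profile ha-090000 still exists).
out/minikube-darwin-amd64 status -p ha-090000 --alsologtostderr
out/minikube-darwin-amd64 -p ha-090000 logs -n 25
kubectl --context ha-090000 get po -A --field-selector=status.phase!=Running
# Inspect the kubelet and the crash-looping storage-provisioner seen in the kubelet excerpt above.
out/minikube-darwin-amd64 ssh -p ha-090000 -- "sudo journalctl -u kubelet --no-pager -n 50"
kubectl --context ha-090000 -n kube-system logs storage-provisioner --previous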

                                                
                                    
TestMultiControlPlane/serial/AddSecondaryNode (195.36s)

                                                
                                                
=== RUN   TestMultiControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-090000 --control-plane -v=7 --alsologtostderr
E0722 04:01:22.428575    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
ha_test.go:605: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-090000 --control-plane -v=7 --alsologtostderr: (3m11.243810538s)
ha_test.go:611: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 status -v=7 --alsologtostderr
ha_test.go:611: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-090000 status -v=7 --alsologtostderr: exit status 2 (448.710995ms)

                                                
                                                
-- stdout --
	ha-090000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-090000-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-090000-m04
	type: Worker
	host: Running
	kubelet: Stopped
	
	ha-090000-m05
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:01:49.307117    4313 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:01:49.307442    4313 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:01:49.307449    4313 out.go:304] Setting ErrFile to fd 2...
	I0722 04:01:49.307453    4313 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:01:49.308157    4313 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1111/.minikube/bin
	I0722 04:01:49.308582    4313 out.go:298] Setting JSON to false
	I0722 04:01:49.308737    4313 mustload.go:65] Loading cluster: ha-090000
	I0722 04:01:49.308792    4313 notify.go:220] Checking for updates...
	I0722 04:01:49.309097    4313 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:01:49.309113    4313 status.go:255] checking status of ha-090000 ...
	I0722 04:01:49.309486    4313 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:01:49.309531    4313 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:01:49.318771    4313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52113
	I0722 04:01:49.319261    4313 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:01:49.319678    4313 main.go:141] libmachine: Using API Version  1
	I0722 04:01:49.319712    4313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:01:49.319952    4313 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:01:49.320085    4313 main.go:141] libmachine: (ha-090000) Calling .GetState
	I0722 04:01:49.320168    4313 main.go:141] libmachine: (ha-090000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 04:01:49.320251    4313 main.go:141] libmachine: (ha-090000) DBG | hyperkit pid from json: 3926
	I0722 04:01:49.321216    4313 status.go:330] ha-090000 host status = "Running" (err=<nil>)
	I0722 04:01:49.321236    4313 host.go:66] Checking if "ha-090000" exists ...
	I0722 04:01:49.321503    4313 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:01:49.321529    4313 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:01:49.329977    4313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52116
	I0722 04:01:49.330316    4313 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:01:49.330722    4313 main.go:141] libmachine: Using API Version  1
	I0722 04:01:49.330747    4313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:01:49.330958    4313 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:01:49.331070    4313 main.go:141] libmachine: (ha-090000) Calling .GetIP
	I0722 04:01:49.331164    4313 host.go:66] Checking if "ha-090000" exists ...
	I0722 04:01:49.331421    4313 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:01:49.331451    4313 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:01:49.339766    4313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52118
	I0722 04:01:49.340077    4313 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:01:49.340423    4313 main.go:141] libmachine: Using API Version  1
	I0722 04:01:49.340439    4313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:01:49.340653    4313 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:01:49.340776    4313 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 04:01:49.340913    4313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 04:01:49.340936    4313 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 04:01:49.341012    4313 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 04:01:49.341116    4313 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 04:01:49.341200    4313 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 04:01:49.341284    4313 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/id_rsa Username:docker}
	I0722 04:01:49.376070    4313 ssh_runner.go:195] Run: systemctl --version
	I0722 04:01:49.382723    4313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 04:01:49.394536    4313 kubeconfig.go:125] found "ha-090000" server: "https://192.169.0.254:8443"
	I0722 04:01:49.394561    4313 api_server.go:166] Checking apiserver status ...
	I0722 04:01:49.394602    4313 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 04:01:49.407268    4313 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2327/cgroup
	W0722 04:01:49.416246    4313 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2327/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 04:01:49.416295    4313 ssh_runner.go:195] Run: ls
	I0722 04:01:49.419410    4313 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0722 04:01:49.422357    4313 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0722 04:01:49.422367    4313 status.go:422] ha-090000 apiserver status = Running (err=<nil>)
	I0722 04:01:49.422377    4313 status.go:257] ha-090000 status: &{Name:ha-090000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 04:01:49.422388    4313 status.go:255] checking status of ha-090000-m02 ...
	I0722 04:01:49.422651    4313 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:01:49.422672    4313 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:01:49.431452    4313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52122
	I0722 04:01:49.431794    4313 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:01:49.432136    4313 main.go:141] libmachine: Using API Version  1
	I0722 04:01:49.432163    4313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:01:49.432379    4313 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:01:49.432486    4313 main.go:141] libmachine: (ha-090000-m02) Calling .GetState
	I0722 04:01:49.432574    4313 main.go:141] libmachine: (ha-090000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 04:01:49.432662    4313 main.go:141] libmachine: (ha-090000-m02) DBG | hyperkit pid from json: 3958
	I0722 04:01:49.433645    4313 status.go:330] ha-090000-m02 host status = "Running" (err=<nil>)
	I0722 04:01:49.433654    4313 host.go:66] Checking if "ha-090000-m02" exists ...
	I0722 04:01:49.433892    4313 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:01:49.433913    4313 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:01:49.442596    4313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52124
	I0722 04:01:49.442938    4313 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:01:49.443291    4313 main.go:141] libmachine: Using API Version  1
	I0722 04:01:49.443314    4313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:01:49.443548    4313 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:01:49.443667    4313 main.go:141] libmachine: (ha-090000-m02) Calling .GetIP
	I0722 04:01:49.443774    4313 host.go:66] Checking if "ha-090000-m02" exists ...
	I0722 04:01:49.444046    4313 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:01:49.444070    4313 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:01:49.452836    4313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52126
	I0722 04:01:49.453177    4313 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:01:49.453561    4313 main.go:141] libmachine: Using API Version  1
	I0722 04:01:49.453578    4313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:01:49.453809    4313 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:01:49.453927    4313 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 04:01:49.454055    4313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 04:01:49.454074    4313 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 04:01:49.454152    4313 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 04:01:49.454233    4313 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 04:01:49.454307    4313 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 04:01:49.454375    4313 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/id_rsa Username:docker}
	I0722 04:01:49.482381    4313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 04:01:49.497821    4313 kubeconfig.go:125] found "ha-090000" server: "https://192.169.0.254:8443"
	I0722 04:01:49.497837    4313 api_server.go:166] Checking apiserver status ...
	I0722 04:01:49.497874    4313 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 04:01:49.510129    4313 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2076/cgroup
	W0722 04:01:49.517949    4313 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2076/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 04:01:49.518012    4313 ssh_runner.go:195] Run: ls
	I0722 04:01:49.521594    4313 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0722 04:01:49.524898    4313 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0722 04:01:49.524910    4313 status.go:422] ha-090000-m02 apiserver status = Running (err=<nil>)
	I0722 04:01:49.524918    4313 status.go:257] ha-090000-m02 status: &{Name:ha-090000-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 04:01:49.524932    4313 status.go:255] checking status of ha-090000-m04 ...
	I0722 04:01:49.525191    4313 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:01:49.525212    4313 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:01:49.534020    4313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52130
	I0722 04:01:49.534366    4313 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:01:49.534686    4313 main.go:141] libmachine: Using API Version  1
	I0722 04:01:49.534698    4313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:01:49.534916    4313 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:01:49.535040    4313 main.go:141] libmachine: (ha-090000-m04) Calling .GetState
	I0722 04:01:49.535128    4313 main.go:141] libmachine: (ha-090000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 04:01:49.535210    4313 main.go:141] libmachine: (ha-090000-m04) DBG | hyperkit pid from json: 3973
	I0722 04:01:49.536207    4313 status.go:330] ha-090000-m04 host status = "Running" (err=<nil>)
	I0722 04:01:49.536215    4313 host.go:66] Checking if "ha-090000-m04" exists ...
	I0722 04:01:49.536492    4313 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:01:49.536540    4313 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:01:49.546310    4313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52132
	I0722 04:01:49.546685    4313 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:01:49.547085    4313 main.go:141] libmachine: Using API Version  1
	I0722 04:01:49.547098    4313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:01:49.547312    4313 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:01:49.547425    4313 main.go:141] libmachine: (ha-090000-m04) Calling .GetIP
	I0722 04:01:49.547523    4313 host.go:66] Checking if "ha-090000-m04" exists ...
	I0722 04:01:49.547797    4313 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:01:49.547819    4313 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:01:49.556601    4313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52134
	I0722 04:01:49.556929    4313 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:01:49.557268    4313 main.go:141] libmachine: Using API Version  1
	I0722 04:01:49.557285    4313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:01:49.557521    4313 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:01:49.557638    4313 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 04:01:49.557780    4313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 04:01:49.557799    4313 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 04:01:49.557883    4313 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 04:01:49.557955    4313 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 04:01:49.558036    4313 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 04:01:49.558106    4313 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/id_rsa Username:docker}
	I0722 04:01:49.585283    4313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 04:01:49.596861    4313 status.go:257] ha-090000-m04 status: &{Name:ha-090000-m04 Host:Running Kubelet:Stopped APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0722 04:01:49.596877    4313 status.go:255] checking status of ha-090000-m05 ...
	I0722 04:01:49.597172    4313 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:01:49.597201    4313 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:01:49.605905    4313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52137
	I0722 04:01:49.606260    4313 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:01:49.606592    4313 main.go:141] libmachine: Using API Version  1
	I0722 04:01:49.606601    4313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:01:49.606828    4313 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:01:49.606953    4313 main.go:141] libmachine: (ha-090000-m05) Calling .GetState
	I0722 04:01:49.607037    4313 main.go:141] libmachine: (ha-090000-m05) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 04:01:49.607133    4313 main.go:141] libmachine: (ha-090000-m05) DBG | hyperkit pid from json: 4016
	I0722 04:01:49.608132    4313 status.go:330] ha-090000-m05 host status = "Running" (err=<nil>)
	I0722 04:01:49.608141    4313 host.go:66] Checking if "ha-090000-m05" exists ...
	I0722 04:01:49.608407    4313 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:01:49.608430    4313 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:01:49.617126    4313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52139
	I0722 04:01:49.617462    4313 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:01:49.617779    4313 main.go:141] libmachine: Using API Version  1
	I0722 04:01:49.617789    4313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:01:49.617983    4313 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:01:49.618091    4313 main.go:141] libmachine: (ha-090000-m05) Calling .GetIP
	I0722 04:01:49.618168    4313 host.go:66] Checking if "ha-090000-m05" exists ...
	I0722 04:01:49.618422    4313 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:01:49.618442    4313 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:01:49.627200    4313 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52141
	I0722 04:01:49.627526    4313 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:01:49.627849    4313 main.go:141] libmachine: Using API Version  1
	I0722 04:01:49.627857    4313 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:01:49.628067    4313 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:01:49.628183    4313 main.go:141] libmachine: (ha-090000-m05) Calling .DriverName
	I0722 04:01:49.628318    4313 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 04:01:49.628330    4313 main.go:141] libmachine: (ha-090000-m05) Calling .GetSSHHostname
	I0722 04:01:49.628414    4313 main.go:141] libmachine: (ha-090000-m05) Calling .GetSSHPort
	I0722 04:01:49.628505    4313 main.go:141] libmachine: (ha-090000-m05) Calling .GetSSHKeyPath
	I0722 04:01:49.628618    4313 main.go:141] libmachine: (ha-090000-m05) Calling .GetSSHUsername
	I0722 04:01:49.628698    4313 sshutil.go:53] new ssh client: &{IP:192.169.0.9 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m05/id_rsa Username:docker}
	I0722 04:01:49.657031    4313 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 04:01:49.669447    4313 kubeconfig.go:125] found "ha-090000" server: "https://192.169.0.254:8443"
	I0722 04:01:49.669461    4313 api_server.go:166] Checking apiserver status ...
	I0722 04:01:49.669501    4313 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 04:01:49.681788    4313 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2020/cgroup
	W0722 04:01:49.691121    4313 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2020/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 04:01:49.691169    4313 ssh_runner.go:195] Run: ls
	I0722 04:01:49.695167    4313 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0722 04:01:49.698223    4313 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0722 04:01:49.698235    4313 status.go:422] ha-090000-m05 apiserver status = Running (err=<nil>)
	I0722 04:01:49.698243    4313 status.go:257] ha-090000-m05 status: &{Name:ha-090000-m05 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:613: failed to run minikube status. args "out/minikube-darwin-amd64 -p ha-090000 status -v=7 --alsologtostderr" : exit status 2
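
The non-zero exit here traces back to the ha-090000-m04 worker reporting "kubelet: Stopped" in the status output above, while the three control-plane nodes report healthy. A hypothetical follow-up (not part of the test run, and assuming the profile still exists) would be to inspect and restart the kubelet on that node:

# Hypothetical follow-up for the stopped kubelet on ha-090000-m04.
out/minikube-darwin-amd64 ssh -p ha-090000 -n ha-090000-m04 -- "sudo systemctl status kubelet --no-pager"
out/minikube-darwin-amd64 ssh -p ha-090000 -n ha-090000-m04 -- "sudo journalctl -u kubelet --no-pager -n 50"
out/minikube-darwin-amd64 -p ha-090000 node start m04 --alsologtostderr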
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p ha-090000 -n ha-090000
helpers_test.go:244: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiControlPlane/serial/AddSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-darwin-amd64 -p ha-090000 logs -n 25: (3.115320504s)
helpers_test.go:252: TestMultiControlPlane/serial/AddSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                                             Args                                                             |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-090000 ssh -n                                                                                                             | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | ha-090000-m03 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-090000 ssh -n ha-090000-m04 sudo cat                                                                                      | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | /home/docker/cp-test_ha-090000-m03_ha-090000-m04.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-090000 cp testdata/cp-test.txt                                                                                            | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | ha-090000-m04:/home/docker/cp-test.txt                                                                                       |           |         |         |                     |                     |
	| ssh     | ha-090000 ssh -n                                                                                                             | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | ha-090000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-090000 cp ha-090000-m04:/home/docker/cp-test.txt                                                                          | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3050769313/001/cp-test_ha-090000-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-090000 ssh -n                                                                                                             | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | ha-090000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| cp      | ha-090000 cp ha-090000-m04:/home/docker/cp-test.txt                                                                          | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | ha-090000:/home/docker/cp-test_ha-090000-m04_ha-090000.txt                                                                   |           |         |         |                     |                     |
	| ssh     | ha-090000 ssh -n                                                                                                             | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | ha-090000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-090000 ssh -n ha-090000 sudo cat                                                                                          | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | /home/docker/cp-test_ha-090000-m04_ha-090000.txt                                                                             |           |         |         |                     |                     |
	| cp      | ha-090000 cp ha-090000-m04:/home/docker/cp-test.txt                                                                          | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | ha-090000-m02:/home/docker/cp-test_ha-090000-m04_ha-090000-m02.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-090000 ssh -n                                                                                                             | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | ha-090000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-090000 ssh -n ha-090000-m02 sudo cat                                                                                      | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | /home/docker/cp-test_ha-090000-m04_ha-090000-m02.txt                                                                         |           |         |         |                     |                     |
	| cp      | ha-090000 cp ha-090000-m04:/home/docker/cp-test.txt                                                                          | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | ha-090000-m03:/home/docker/cp-test_ha-090000-m04_ha-090000-m03.txt                                                           |           |         |         |                     |                     |
	| ssh     | ha-090000 ssh -n                                                                                                             | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | ha-090000-m04 sudo cat                                                                                                       |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                                                     |           |         |         |                     |                     |
	| ssh     | ha-090000 ssh -n ha-090000-m03 sudo cat                                                                                      | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:48 PDT |
	|         | /home/docker/cp-test_ha-090000-m04_ha-090000-m03.txt                                                                         |           |         |         |                     |                     |
	| node    | ha-090000 node stop m02 -v=7                                                                                                 | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:48 PDT | 22 Jul 24 03:49 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | ha-090000 node start m02 -v=7                                                                                                | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:49 PDT | 22 Jul 24 03:49 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-090000 -v=7                                                                                                       | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:49 PDT |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | -p ha-090000 -v=7                                                                                                            | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:49 PDT | 22 Jul 24 03:50 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-090000 --wait=true -v=7                                                                                                | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:50 PDT | 22 Jul 24 03:54 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| node    | list -p ha-090000                                                                                                            | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:54 PDT |                     |
	| node    | ha-090000 node delete m03 -v=7                                                                                               | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:54 PDT | 22 Jul 24 03:54 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| stop    | ha-090000 stop -v=7                                                                                                          | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:54 PDT | 22 Jul 24 03:55 PDT |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	| start   | -p ha-090000 --wait=true                                                                                                     | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:55 PDT |                     |
	|         | -v=7 --alsologtostderr                                                                                                       |           |         |         |                     |                     |
	|         | --driver=hyperkit                                                                                                            |           |         |         |                     |                     |
	| node    | add -p ha-090000                                                                                                             | ha-090000 | jenkins | v1.33.1 | 22 Jul 24 03:58 PDT | 22 Jul 24 04:01 PDT |
	|         | --control-plane -v=7                                                                                                         |           |         |         |                     |                     |
	|         | --alsologtostderr                                                                                                            |           |         |         |                     |                     |
	|---------|------------------------------------------------------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 03:55:14
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 03:55:14.001165    3911 out.go:291] Setting OutFile to fd 1 ...
	I0722 03:55:14.001338    3911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:55:14.001344    3911 out.go:304] Setting ErrFile to fd 2...
	I0722 03:55:14.001348    3911 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:55:14.001524    3911 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1111/.minikube/bin
	I0722 03:55:14.002913    3911 out.go:298] Setting JSON to false
	I0722 03:55:14.025317    3911 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":3283,"bootTime":1721642431,"procs":427,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0722 03:55:14.025414    3911 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 03:55:14.048097    3911 out.go:177] * [ha-090000] minikube v1.33.1 on Darwin 14.5
	I0722 03:55:14.089944    3911 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 03:55:14.089999    3911 notify.go:220] Checking for updates...
	I0722 03:55:14.132553    3911 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1111/kubeconfig
	I0722 03:55:14.153953    3911 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0722 03:55:14.177091    3911 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 03:55:14.197830    3911 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1111/.minikube
	I0722 03:55:14.219112    3911 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 03:55:14.240693    3911 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:55:14.241352    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:55:14.241433    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:55:14.250957    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51948
	I0722 03:55:14.251322    3911 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:55:14.251741    3911 main.go:141] libmachine: Using API Version  1
	I0722 03:55:14.251758    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:55:14.252024    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:55:14.252166    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:14.252364    3911 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 03:55:14.252613    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:55:14.252647    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:55:14.260865    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51950
	I0722 03:55:14.261199    3911 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:55:14.261501    3911 main.go:141] libmachine: Using API Version  1
	I0722 03:55:14.261519    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:55:14.261723    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:55:14.261829    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:14.289710    3911 out.go:177] * Using the hyperkit driver based on existing profile
	I0722 03:55:14.331989    3911 start.go:297] selected driver: hyperkit
	I0722 03:55:14.332015    3911 start.go:901] validating driver "hyperkit" against &{Name:ha-090000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-090000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 03:55:14.332262    3911 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 03:55:14.332464    3911 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 03:55:14.332656    3911 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19313-1111/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0722 03:55:14.342163    3911 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0722 03:55:14.345875    3911 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:55:14.345899    3911 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0722 03:55:14.348432    3911 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 03:55:14.348467    3911 cni.go:84] Creating CNI manager for ""
	I0722 03:55:14.348473    3911 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0722 03:55:14.348551    3911 start.go:340] cluster config:
	{Name:ha-090000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-090000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 03:55:14.348667    3911 iso.go:125] acquiring lock: {Name:mk28fa3b914b659bb36b0449a0ad3ab1345dae32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 03:55:14.390735    3911 out.go:177] * Starting "ha-090000" primary control-plane node in "ha-090000" cluster
	I0722 03:55:14.412034    3911 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 03:55:14.412101    3911 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0722 03:55:14.412134    3911 cache.go:56] Caching tarball of preloaded images
	I0722 03:55:14.412332    3911 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0722 03:55:14.412374    3911 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 03:55:14.412547    3911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/config.json ...
	I0722 03:55:14.413322    3911 start.go:360] acquireMachinesLock for ha-090000: {Name:mk52223550765842aacf96640479870ec8b5e985 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 03:55:14.413444    3911 start.go:364] duration metric: took 104.878µs to acquireMachinesLock for "ha-090000"
	I0722 03:55:14.413466    3911 start.go:96] Skipping create...Using existing machine configuration
	I0722 03:55:14.413480    3911 fix.go:54] fixHost starting: 
	I0722 03:55:14.413779    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:55:14.413805    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:55:14.422850    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51952
	I0722 03:55:14.423211    3911 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:55:14.423607    3911 main.go:141] libmachine: Using API Version  1
	I0722 03:55:14.423626    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:55:14.423868    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:55:14.424010    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:14.424163    3911 main.go:141] libmachine: (ha-090000) Calling .GetState
	I0722 03:55:14.424269    3911 main.go:141] libmachine: (ha-090000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:55:14.424340    3911 main.go:141] libmachine: (ha-090000) DBG | hyperkit pid from json: 3743
	I0722 03:55:14.425373    3911 main.go:141] libmachine: (ha-090000) DBG | hyperkit pid 3743 missing from process table
	I0722 03:55:14.425407    3911 fix.go:112] recreateIfNeeded on ha-090000: state=Stopped err=<nil>
	I0722 03:55:14.425425    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	W0722 03:55:14.425550    3911 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 03:55:14.467917    3911 out.go:177] * Restarting existing hyperkit VM for "ha-090000" ...
	I0722 03:55:14.490898    3911 main.go:141] libmachine: (ha-090000) Calling .Start
	I0722 03:55:14.491161    3911 main.go:141] libmachine: (ha-090000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:55:14.491206    3911 main.go:141] libmachine: (ha-090000) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/hyperkit.pid
	I0722 03:55:14.492929    3911 main.go:141] libmachine: (ha-090000) DBG | hyperkit pid 3743 missing from process table
	I0722 03:55:14.492946    3911 main.go:141] libmachine: (ha-090000) DBG | pid 3743 is in state "Stopped"
	I0722 03:55:14.492978    3911 main.go:141] libmachine: (ha-090000) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/hyperkit.pid...
	I0722 03:55:14.493148    3911 main.go:141] libmachine: (ha-090000) DBG | Using UUID 865eb55d-4879-4f09-8c93-9ca2b7f6f541
	I0722 03:55:14.657956    3911 main.go:141] libmachine: (ha-090000) DBG | Generated MAC de:e:68:47:cf:44
	I0722 03:55:14.657983    3911 main.go:141] libmachine: (ha-090000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-090000
	I0722 03:55:14.658095    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"865eb55d-4879-4f09-8c93-9ca2b7f6f541", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c2780)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0722 03:55:14.658125    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"865eb55d-4879-4f09-8c93-9ca2b7f6f541", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003c2780)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0722 03:55:14.658167    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "865eb55d-4879-4f09-8c93-9ca2b7f6f541", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/ha-090000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/tty,log=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/bzimage,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-090000"}
	I0722 03:55:14.658258    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 865eb55d-4879-4f09-8c93-9ca2b7f6f541 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/ha-090000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/tty,log=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/console-ring -f kexec,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/bzimage,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-090000"
	I0722 03:55:14.658283    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0722 03:55:14.659556    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 DEBUG: hyperkit: Pid is 3926
	I0722 03:55:14.659971    3911 main.go:141] libmachine: (ha-090000) DBG | Attempt 0
	I0722 03:55:14.659983    3911 main.go:141] libmachine: (ha-090000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:55:14.660096    3911 main.go:141] libmachine: (ha-090000) DBG | hyperkit pid from json: 3926
	I0722 03:55:14.661907    3911 main.go:141] libmachine: (ha-090000) DBG | Searching for de:e:68:47:cf:44 in /var/db/dhcpd_leases ...
	I0722 03:55:14.661965    3911 main.go:141] libmachine: (ha-090000) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0722 03:55:14.661986    3911 main.go:141] libmachine: (ha-090000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:b6:20:31:8d:e0:89 ID:1,b6:20:31:8d:e0:89 Lease:0x669e3a76}
	I0722 03:55:14.662001    3911 main.go:141] libmachine: (ha-090000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ca:7d:32:d9:5d:55 ID:1,ca:7d:32:d9:5d:55 Lease:0x669f8bc8}
	I0722 03:55:14.662012    3911 main.go:141] libmachine: (ha-090000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4e:65:fa:f9:26:3 ID:1,4e:65:fa:f9:26:3 Lease:0x669f8b16}
	I0722 03:55:14.662031    3911 main.go:141] libmachine: (ha-090000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:de:e:68:47:cf:44 ID:1,de:e:68:47:cf:44 Lease:0x669f8aec}
	I0722 03:55:14.662047    3911 main.go:141] libmachine: (ha-090000) DBG | Found match: de:e:68:47:cf:44
	I0722 03:55:14.662058    3911 main.go:141] libmachine: (ha-090000) DBG | IP: 192.169.0.5
	I0722 03:55:14.662088    3911 main.go:141] libmachine: (ha-090000) Calling .GetConfigRaw
	I0722 03:55:14.662970    3911 main.go:141] libmachine: (ha-090000) Calling .GetIP
	I0722 03:55:14.663190    3911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/config.json ...
	I0722 03:55:14.663619    3911 machine.go:94] provisionDockerMachine start ...
	I0722 03:55:14.663631    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:14.663775    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:14.663892    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:14.663995    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:14.664107    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:14.664217    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:14.664369    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:55:14.664624    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0722 03:55:14.664637    3911 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 03:55:14.668018    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0722 03:55:14.726271    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0722 03:55:14.726986    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0722 03:55:14.727016    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0722 03:55:14.727030    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0722 03:55:14.727041    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:14 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0722 03:55:15.102308    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:15 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0722 03:55:15.102323    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:15 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0722 03:55:15.217057    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0722 03:55:15.217079    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0722 03:55:15.217092    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0722 03:55:15.217103    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:15 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0722 03:55:15.217955    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:15 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0722 03:55:15.217966    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:15 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0722 03:55:20.486836    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:20 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0722 03:55:20.486863    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:20 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0722 03:55:20.486878    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:20 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0722 03:55:20.511003    3911 main.go:141] libmachine: (ha-090000) DBG | 2024/07/22 03:55:20 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0722 03:55:49.725974    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 03:55:49.725988    3911 main.go:141] libmachine: (ha-090000) Calling .GetMachineName
	I0722 03:55:49.726125    3911 buildroot.go:166] provisioning hostname "ha-090000"
	I0722 03:55:49.726138    3911 main.go:141] libmachine: (ha-090000) Calling .GetMachineName
	I0722 03:55:49.726243    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:49.726335    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:49.726420    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:49.726506    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:49.726616    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:49.726741    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:55:49.726890    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0722 03:55:49.726899    3911 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-090000 && echo "ha-090000" | sudo tee /etc/hostname
	I0722 03:55:49.789306    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-090000
	
	I0722 03:55:49.789328    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:49.789466    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:49.789581    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:49.789678    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:49.789776    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:49.789915    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:55:49.790061    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0722 03:55:49.790072    3911 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-090000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-090000/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-090000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 03:55:49.849551    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 03:55:49.849576    3911 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19313-1111/.minikube CaCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19313-1111/.minikube}
	I0722 03:55:49.849589    3911 buildroot.go:174] setting up certificates
	I0722 03:55:49.849598    3911 provision.go:84] configureAuth start
	I0722 03:55:49.849606    3911 main.go:141] libmachine: (ha-090000) Calling .GetMachineName
	I0722 03:55:49.849736    3911 main.go:141] libmachine: (ha-090000) Calling .GetIP
	I0722 03:55:49.849829    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:49.849906    3911 provision.go:143] copyHostCerts
	I0722 03:55:49.849941    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem
	I0722 03:55:49.850010    3911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem, removing ...
	I0722 03:55:49.850019    3911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem
	I0722 03:55:49.850190    3911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem (1078 bytes)
	I0722 03:55:49.850418    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem
	I0722 03:55:49.850458    3911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem, removing ...
	I0722 03:55:49.850463    3911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem
	I0722 03:55:49.850553    3911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem (1123 bytes)
	I0722 03:55:49.850707    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem
	I0722 03:55:49.850746    3911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem, removing ...
	I0722 03:55:49.850751    3911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem
	I0722 03:55:49.850838    3911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem (1675 bytes)
	I0722 03:55:49.850994    3911 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem org=jenkins.ha-090000 san=[127.0.0.1 192.169.0.5 ha-090000 localhost minikube]
	I0722 03:55:49.954745    3911 provision.go:177] copyRemoteCerts
	I0722 03:55:49.954797    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 03:55:49.954814    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:49.954945    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:49.955036    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:49.955138    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:49.955226    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/id_rsa Username:docker}
	I0722 03:55:49.988017    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0722 03:55:49.988090    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0722 03:55:50.006955    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0722 03:55:50.007018    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0722 03:55:50.026488    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0722 03:55:50.026558    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 03:55:50.045921    3911 provision.go:87] duration metric: took 196.3146ms to configureAuth
	I0722 03:55:50.045933    3911 buildroot.go:189] setting minikube options for container-runtime
	I0722 03:55:50.046087    3911 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:55:50.046101    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:50.046225    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:50.046308    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:50.046401    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:50.046493    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:50.046569    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:50.046685    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:55:50.046803    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0722 03:55:50.046811    3911 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0722 03:55:50.100376    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0722 03:55:50.100387    3911 buildroot.go:70] root file system type: tmpfs
	I0722 03:55:50.100457    3911 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0722 03:55:50.100468    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:50.100595    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:50.100692    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:50.100789    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:50.100888    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:50.101021    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:55:50.101173    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0722 03:55:50.101220    3911 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %!s(MISSING) "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0722 03:55:50.162706    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0722 03:55:50.162761    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:50.162891    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:50.162997    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:50.163099    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:50.163182    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:50.163329    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:55:50.163465    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0722 03:55:50.163477    3911 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0722 03:55:51.839255    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0722 03:55:51.839270    3911 machine.go:97] duration metric: took 37.176641879s to provisionDockerMachine
	I0722 03:55:51.839283    3911 start.go:293] postStartSetup for "ha-090000" (driver="hyperkit")
	I0722 03:55:51.839300    3911 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 03:55:51.839314    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:51.839490    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 03:55:51.839510    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:51.839611    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:51.839703    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:51.839796    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:51.839928    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/id_rsa Username:docker}
	I0722 03:55:51.873857    3911 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 03:55:51.877062    3911 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 03:55:51.877075    3911 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1111/.minikube/addons for local assets ...
	I0722 03:55:51.877182    3911 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1111/.minikube/files for local assets ...
	I0722 03:55:51.877378    3911 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem -> 16372.pem in /etc/ssl/certs
	I0722 03:55:51.877384    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem -> /etc/ssl/certs/16372.pem
	I0722 03:55:51.877594    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 03:55:51.885692    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem --> /etc/ssl/certs/16372.pem (1708 bytes)
	I0722 03:55:51.904673    3911 start.go:296] duration metric: took 65.382263ms for postStartSetup
	I0722 03:55:51.904692    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:51.904859    3911 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0722 03:55:51.904872    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:51.904961    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:51.905042    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:51.905118    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:51.905210    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/id_rsa Username:docker}
	I0722 03:55:51.938400    3911 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0722 03:55:51.938461    3911 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0722 03:55:51.992039    3911 fix.go:56] duration metric: took 37.579572847s for fixHost
	I0722 03:55:51.992063    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:51.992208    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:51.992304    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:51.992398    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:51.992482    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:51.992602    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:55:51.992763    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.5 22 <nil> <nil>}
	I0722 03:55:51.992770    3911 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 03:55:52.046381    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721645751.936433832
	
	I0722 03:55:52.046393    3911 fix.go:216] guest clock: 1721645751.936433832
	I0722 03:55:52.046398    3911 fix.go:229] Guest: 2024-07-22 03:55:51.936433832 -0700 PDT Remote: 2024-07-22 03:55:51.992052 -0700 PDT m=+38.026686282 (delta=-55.618168ms)
	I0722 03:55:52.046416    3911 fix.go:200] guest clock delta is within tolerance: -55.618168ms
	I0722 03:55:52.046421    3911 start.go:83] releasing machines lock for "ha-090000", held for 37.633981911s
	I0722 03:55:52.046442    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:52.046575    3911 main.go:141] libmachine: (ha-090000) Calling .GetIP
	I0722 03:55:52.046677    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:52.046990    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:52.047122    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:55:52.047226    3911 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 03:55:52.047248    3911 ssh_runner.go:195] Run: cat /version.json
	I0722 03:55:52.047259    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:52.047259    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:55:52.047380    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:52.047396    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:55:52.047483    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:52.047511    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:55:52.047561    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:52.047626    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:55:52.047654    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/id_rsa Username:docker}
	I0722 03:55:52.047720    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/id_rsa Username:docker}
	I0722 03:55:52.075429    3911 ssh_runner.go:195] Run: systemctl --version
	I0722 03:55:52.079894    3911 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 03:55:52.124828    3911 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 03:55:52.124898    3911 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 03:55:52.137859    3911 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 03:55:52.137870    3911 start.go:495] detecting cgroup driver to use...
	I0722 03:55:52.137970    3911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 03:55:52.155379    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0722 03:55:52.164198    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0722 03:55:52.173115    3911 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0722 03:55:52.173156    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0722 03:55:52.182074    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 03:55:52.190972    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0722 03:55:52.199765    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 03:55:52.208507    3911 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 03:55:52.217591    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0722 03:55:52.226424    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0722 03:55:52.235243    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0722 03:55:52.244124    3911 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 03:55:52.252099    3911 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 03:55:52.259973    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:55:52.354629    3911 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0722 03:55:52.373701    3911 start.go:495] detecting cgroup driver to use...
	I0722 03:55:52.373781    3911 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0722 03:55:52.386226    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 03:55:52.407006    3911 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 03:55:52.422442    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 03:55:52.433467    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 03:55:52.445302    3911 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0722 03:55:52.465665    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 03:55:52.477795    3911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 03:55:52.493683    3911 ssh_runner.go:195] Run: which cri-dockerd
	I0722 03:55:52.496631    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0722 03:55:52.503860    3911 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0722 03:55:52.517344    3911 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0722 03:55:52.615407    3911 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0722 03:55:52.719878    3911 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0722 03:55:52.719955    3911 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0722 03:55:52.735170    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:55:52.840992    3911 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 03:55:55.172776    3911 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.331829238s)
	I0722 03:55:55.172846    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0722 03:55:55.183162    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 03:55:55.193307    3911 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0722 03:55:55.284550    3911 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0722 03:55:55.395161    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:55:55.503613    3911 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0722 03:55:55.517310    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 03:55:55.528594    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:55:55.620227    3911 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0722 03:55:55.685036    3911 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0722 03:55:55.685111    3911 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0722 03:55:55.689532    3911 start.go:563] Will wait 60s for crictl version
	I0722 03:55:55.689580    3911 ssh_runner.go:195] Run: which crictl
	I0722 03:55:55.692688    3911 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 03:55:55.719714    3911 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0722 03:55:55.719788    3911 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 03:55:55.737225    3911 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 03:55:55.780302    3911 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0722 03:55:55.780349    3911 main.go:141] libmachine: (ha-090000) Calling .GetIP
	I0722 03:55:55.780734    3911 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0722 03:55:55.785388    3911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 03:55:55.796137    3911 kubeadm.go:883] updating cluster {Name:ha-090000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:ha-090000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0722 03:55:55.796229    3911 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 03:55:55.796288    3911 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0722 03:55:55.808589    3911 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240719-e7903573
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0722 03:55:55.808605    3911 docker.go:615] Images already preloaded, skipping extraction
	I0722 03:55:55.808686    3911 ssh_runner.go:195] Run: docker images --format {{.Repository}}:{{.Tag}}
	I0722 03:55:55.823528    3911 docker.go:685] Got preloaded images: -- stdout --
	kindest/kindnetd:v20240719-e7903573
	registry.k8s.io/kube-apiserver:v1.30.3
	registry.k8s.io/kube-controller-manager:v1.30.3
	registry.k8s.io/kube-scheduler:v1.30.3
	registry.k8s.io/kube-proxy:v1.30.3
	ghcr.io/kube-vip/kube-vip:v0.8.0
	registry.k8s.io/etcd:3.5.12-0
	registry.k8s.io/coredns/coredns:v1.11.1
	registry.k8s.io/pause:3.9
	gcr.io/k8s-minikube/storage-provisioner:v5
	gcr.io/k8s-minikube/busybox:1.28
	
	-- /stdout --
	I0722 03:55:55.823552    3911 cache_images.go:84] Images are preloaded, skipping loading
	I0722 03:55:55.823561    3911 kubeadm.go:934] updating node { 192.169.0.5 8443 v1.30.3 docker true true} ...
	I0722 03:55:55.823650    3911 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-090000 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.5
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-090000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
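The kubelet unit fragment above uses the usual systemd drop-in trick: the empty ExecStart= clears whatever command the base unit inherited, and the second ExecStart= supplies the node-specific flags (--hostname-override=ha-090000, --node-ip=192.169.0.5). To confirm what the kubelet actually ended up running with inside the VM, a sketch:

    # sketch: inspect the effective kubelet unit and the live process flags (run inside the guest)
    systemctl cat kubelet        # base unit plus the 10-kubeadm.conf drop-in written below
    pgrep -a kubelet             # shows --hostname-override / --node-ip on the running process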
	I0722 03:55:55.823715    3911 ssh_runner.go:195] Run: docker info --format {{.CgroupDriver}}
	I0722 03:55:55.843770    3911 cni.go:84] Creating CNI manager for ""
	I0722 03:55:55.843782    3911 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0722 03:55:55.843795    3911 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0722 03:55:55.843811    3911 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.169.0.5 APIServerPort:8443 KubernetesVersion:v1.30.3 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-090000 NodeName:ha-090000 DNSDomain:cluster.local CRISocket:/var/run/cri-dockerd.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.169.0.5"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.169.0.5 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manif
ests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/cri-dockerd.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0722 03:55:55.843918    3911 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.169.0.5
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/cri-dockerd.sock
	  name: "ha-090000"
	  kubeletExtraArgs:
	    node-ip: 192.169.0.5
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.169.0.5"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.30.3
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/cri-dockerd.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
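The YAML above is the kubeadm configuration that gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines further down. If you need to sanity-check such a generated file by hand, recent kubeadm releases ship a validator (assumed available in the v1.30 line used here; this log does not invoke it):

    # sketch: validate a generated kubeadm config before using it
    sudo /var/lib/minikube/binaries/v1.30.3/kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new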
	
	I0722 03:55:55.843949    3911 kube-vip.go:115] generating kube-vip config ...
	I0722 03:55:55.843997    3911 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0722 03:55:55.858984    3911 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0722 03:55:55.859051    3911 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
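The static pod manifest above (written to /etc/kubernetes/manifests/kube-vip.yaml below) is what provides the API server VIP: kube-vip on each control-plane node competes for the plndr-cp-lock lease named in the env vars, and the winner announces 192.169.0.254 on eth0 via ARP, with lb_enable adding load-balancing of port 8443 across the control planes, matching the "auto-enabling control-plane load-balancing" line above. Two hedged ways to see where the VIP currently lives, assuming kubectl access and a shell on a control-plane node:

    # sketch: find the current VIP holder (lease name taken from the manifest's vip_leasename)
    kubectl -n kube-system get lease plndr-cp-lock
    # sketch: on a control-plane node, check whether the VIP is bound to eth0
    ip addr show eth0 | grep 192.169.0.254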
	I0722 03:55:55.859099    3911 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 03:55:55.871541    3911 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 03:55:55.871605    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0722 03:55:55.879901    3911 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (307 bytes)
	I0722 03:55:55.893317    3911 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 03:55:55.906860    3911 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2148 bytes)
	I0722 03:55:55.920583    3911 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0722 03:55:55.934115    3911 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0722 03:55:55.937202    3911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 03:55:55.947512    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:55:56.043601    3911 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 03:55:56.058460    3911 certs.go:68] Setting up /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000 for IP: 192.169.0.5
	I0722 03:55:56.058473    3911 certs.go:194] generating shared ca certs ...
	I0722 03:55:56.058482    3911 certs.go:226] acquiring lock for ca certs: {Name:mk31b6ba3ba4e51acc59db740baf7c8ba8dd988b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:55:56.058655    3911 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.key
	I0722 03:55:56.058735    3911 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.key
	I0722 03:55:56.058744    3911 certs.go:256] generating profile certs ...
	I0722 03:55:56.058828    3911 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/client.key
	I0722 03:55:56.058850    3911 certs.go:363] generating signed profile cert for "minikube": /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key.9d35a603
	I0722 03:55:56.058866    3911 crypto.go:68] Generating cert /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.crt.9d35a603 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.169.0.5 192.169.0.6 192.169.0.254]
	I0722 03:55:56.176369    3911 crypto.go:156] Writing cert to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.crt.9d35a603 ...
	I0722 03:55:56.176387    3911 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.crt.9d35a603: {Name:mk56ec66ac2a3d80a126aae24a23c208f41c56a5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:55:56.176780    3911 crypto.go:164] Writing key to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key.9d35a603 ...
	I0722 03:55:56.176790    3911 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key.9d35a603: {Name:mk0da3ff1ed021cd0c62e370f79895aeed00bfd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:55:56.177042    3911 certs.go:381] copying /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.crt.9d35a603 -> /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.crt
	I0722 03:55:56.177289    3911 certs.go:385] copying /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key.9d35a603 -> /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key
	I0722 03:55:56.177558    3911 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.key
	I0722 03:55:56.177573    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0722 03:55:56.177599    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0722 03:55:56.177621    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0722 03:55:56.177643    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0722 03:55:56.177663    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0722 03:55:56.177684    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0722 03:55:56.177705    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0722 03:55:56.177727    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0722 03:55:56.177832    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/1637.pem (1338 bytes)
	W0722 03:55:56.177883    3911 certs.go:480] ignoring /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/1637_empty.pem, impossibly tiny 0 bytes
	I0722 03:55:56.177892    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 03:55:56.177935    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem (1078 bytes)
	I0722 03:55:56.177980    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem (1123 bytes)
	I0722 03:55:56.178009    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem (1675 bytes)
	I0722 03:55:56.178085    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem (1708 bytes)
	I0722 03:55:56.178123    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/1637.pem -> /usr/share/ca-certificates/1637.pem
	I0722 03:55:56.178148    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem -> /usr/share/ca-certificates/16372.pem
	I0722 03:55:56.178168    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0722 03:55:56.178610    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 03:55:56.201771    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 03:55:56.234700    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 03:55:56.277028    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0722 03:55:56.303799    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 03:55:56.355626    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 03:55:56.423367    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 03:55:56.460516    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 03:55:56.495805    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/1637.pem --> /usr/share/ca-certificates/1637.pem (1338 bytes)
	I0722 03:55:56.523902    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem --> /usr/share/ca-certificates/16372.pem (1708 bytes)
	I0722 03:55:56.561999    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 03:55:56.592542    3911 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0722 03:55:56.609376    3911 ssh_runner.go:195] Run: openssl version
	I0722 03:55:56.613622    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1637.pem && ln -fs /usr/share/ca-certificates/1637.pem /etc/ssl/certs/1637.pem"
	I0722 03:55:56.622123    3911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1637.pem
	I0722 03:55:56.625637    3911 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:38 /usr/share/ca-certificates/1637.pem
	I0722 03:55:56.625671    3911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1637.pem
	I0722 03:55:56.629816    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1637.pem /etc/ssl/certs/51391683.0"
	I0722 03:55:56.638362    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16372.pem && ln -fs /usr/share/ca-certificates/16372.pem /etc/ssl/certs/16372.pem"
	I0722 03:55:56.646609    3911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16372.pem
	I0722 03:55:56.650063    3911 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:38 /usr/share/ca-certificates/16372.pem
	I0722 03:55:56.650097    3911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16372.pem
	I0722 03:55:56.654257    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16372.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 03:55:56.662670    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 03:55:56.671261    3911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 03:55:56.674720    3911 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 03:55:56.674754    3911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 03:55:56.678972    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
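The three ln -fs commands above (51391683.0, 3ec20f2e.0, b5213941.0) create the hash-named symlinks OpenSSL expects under /etc/ssl/certs; the hash is exactly what the preceding openssl x509 -hash -noout calls print for each PEM. A sketch that performs both steps for one certificate:

    # sketch: install a CA cert and create its OpenSSL subject-hash symlink in one go
    H=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${H}.0"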
	I0722 03:55:56.687498    3911 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 03:55:56.691047    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 03:55:56.695322    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 03:55:56.699702    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 03:55:56.704065    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 03:55:56.708401    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 03:55:56.712852    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
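The -checkend 86400 flag in the six openssl runs above makes openssl exit non-zero if the certificate expires within the next 86400 seconds (24 hours); that pass/fail result is how the restart path decides whether the existing control-plane certificates are still usable. To see the actual expiry dates rather than a yes/no, a sketch:

    # sketch: print expiry dates for the same certificates the restart path checks
    for c in apiserver-etcd-client apiserver-kubelet-client front-proxy-client; do
      sudo openssl x509 -noout -enddate -in "/var/lib/minikube/certs/$c.crt"
    done
    sudo openssl x509 -noout -enddate -in /var/lib/minikube/certs/etcd/server.crt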
	I0722 03:55:56.717112    3911 kubeadm.go:392] StartCluster: {Name:ha-090000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 C
lusterName:ha-090000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true} {Name:m04 IP:192.169.0.8 Port:0 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:fals
e inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimi
zations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 03:55:56.717233    3911 ssh_runner.go:195] Run: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format={{.ID}}
	I0722 03:55:56.730051    3911 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0722 03:55:56.737806    3911 kubeadm.go:408] found existing configuration files, will attempt cluster restart
	I0722 03:55:56.737821    3911 kubeadm.go:593] restartPrimaryControlPlane start ...
	I0722 03:55:56.737861    3911 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0722 03:55:56.745356    3911 kubeadm.go:130] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0722 03:55:56.745651    3911 kubeconfig.go:47] verify endpoint returned: get endpoint: "ha-090000" does not appear in /Users/jenkins/minikube-integration/19313-1111/kubeconfig
	I0722 03:55:56.745730    3911 kubeconfig.go:62] /Users/jenkins/minikube-integration/19313-1111/kubeconfig needs updating (will repair): [kubeconfig missing "ha-090000" cluster setting kubeconfig missing "ha-090000" context setting]
	I0722 03:55:56.745922    3911 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1111/kubeconfig: {Name:mkf2b240918cd66dabf425a67d7df0a0c9aa8c39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:55:56.746572    3911 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19313-1111/kubeconfig
	I0722 03:55:56.746765    3911 kapi.go:59] client config for ha-090000: &rest.Config{Host:"https://192.169.0.5:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/client.key", CAFile:"/Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)},
UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xc727ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0722 03:55:56.747076    3911 cert_rotation.go:137] Starting client certificate rotation controller
	I0722 03:55:56.747254    3911 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0722 03:55:56.754607    3911 kubeadm.go:630] The running cluster does not require reconfiguration: 192.169.0.5
	I0722 03:55:56.754620    3911 kubeadm.go:597] duration metric: took 16.795414ms to restartPrimaryControlPlane
	I0722 03:55:56.754625    3911 kubeadm.go:394] duration metric: took 37.520322ms to StartCluster
	I0722 03:55:56.754634    3911 settings.go:142] acquiring lock: {Name:mk61cf5b2a74edb35dda57ecbe8abc2ea6c58c7c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:55:56.754711    3911 settings.go:150] Updating kubeconfig:  /Users/jenkins/minikube-integration/19313-1111/kubeconfig
	I0722 03:55:56.755134    3911 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1111/kubeconfig: {Name:mkf2b240918cd66dabf425a67d7df0a0c9aa8c39 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:55:56.755360    3911 start.go:233] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.169.0.5 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 03:55:56.755373    3911 start.go:241] waiting for startup goroutines ...
	I0722 03:55:56.755387    3911 addons.go:507] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volcano:false volumesnapshots:false yakd:false]
	I0722 03:55:56.755497    3911 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:55:56.799163    3911 out.go:177] * Enabled addons: 
	I0722 03:55:56.820182    3911 addons.go:510] duration metric: took 64.792244ms for enable addons: enabled=[]
	I0722 03:55:56.820230    3911 start.go:246] waiting for cluster config update ...
	I0722 03:55:56.820244    3911 start.go:255] writing updated cluster config ...
	I0722 03:55:56.842189    3911 out.go:177] 
	I0722 03:55:56.863789    3911 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:55:56.863918    3911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/config.json ...
	I0722 03:55:56.886431    3911 out.go:177] * Starting "ha-090000-m02" control-plane node in "ha-090000" cluster
	I0722 03:55:56.928353    3911 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 03:55:56.928403    3911 cache.go:56] Caching tarball of preloaded images
	I0722 03:55:56.928581    3911 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0722 03:55:56.928604    3911 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 03:55:56.928730    3911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/config.json ...
	I0722 03:55:56.929636    3911 start.go:360] acquireMachinesLock for ha-090000-m02: {Name:mk52223550765842aacf96640479870ec8b5e985 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 03:55:56.929748    3911 start.go:364] duration metric: took 80.846µs to acquireMachinesLock for "ha-090000-m02"
	I0722 03:55:56.929773    3911 start.go:96] Skipping create...Using existing machine configuration
	I0722 03:55:56.929782    3911 fix.go:54] fixHost starting: m02
	I0722 03:55:56.930190    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:55:56.930213    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:55:56.939208    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51974
	I0722 03:55:56.939548    3911 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:55:56.939878    3911 main.go:141] libmachine: Using API Version  1
	I0722 03:55:56.939889    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:55:56.940129    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:55:56.940269    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 03:55:56.940364    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetState
	I0722 03:55:56.940445    3911 main.go:141] libmachine: (ha-090000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:55:56.940553    3911 main.go:141] libmachine: (ha-090000-m02) DBG | hyperkit pid from json: 3753
	I0722 03:55:56.941410    3911 main.go:141] libmachine: (ha-090000-m02) DBG | hyperkit pid 3753 missing from process table
	I0722 03:55:56.941430    3911 fix.go:112] recreateIfNeeded on ha-090000-m02: state=Stopped err=<nil>
	I0722 03:55:56.941439    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	W0722 03:55:56.941520    3911 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 03:55:56.963252    3911 out.go:177] * Restarting existing hyperkit VM for "ha-090000-m02" ...
	I0722 03:55:56.984572    3911 main.go:141] libmachine: (ha-090000-m02) Calling .Start
	I0722 03:55:56.984884    3911 main.go:141] libmachine: (ha-090000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:55:56.984972    3911 main.go:141] libmachine: (ha-090000-m02) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/hyperkit.pid
	I0722 03:55:56.986700    3911 main.go:141] libmachine: (ha-090000-m02) DBG | hyperkit pid 3753 missing from process table
	I0722 03:55:56.986715    3911 main.go:141] libmachine: (ha-090000-m02) DBG | pid 3753 is in state "Stopped"
	I0722 03:55:56.986731    3911 main.go:141] libmachine: (ha-090000-m02) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/hyperkit.pid...
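The pid recorded in hyperkit.pid (3753) is no longer in the process table, so the driver concludes the VM was shut down uncleanly, removes the stale pid file, and relaunches hyperkit; the new pid (3958) appears a few lines below. The liveness test behind that decision amounts to something like the following host-side sketch, using the path shown in this log:

    # sketch: check whether a recorded hyperkit pid is still alive (signal 0 probes without killing)
    PIDFILE=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/hyperkit.pid
    kill -0 "$(cat "$PIDFILE")" 2>/dev/null && echo running || echo "stale pid file"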
	I0722 03:55:56.987014    3911 main.go:141] libmachine: (ha-090000-m02) DBG | Using UUID a238bb05-e07d-4298-98be-9d336c163b01
	I0722 03:55:57.014110    3911 main.go:141] libmachine: (ha-090000-m02) DBG | Generated MAC 4e:65:fa:f9:26:3
	I0722 03:55:57.014143    3911 main.go:141] libmachine: (ha-090000-m02) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-090000
	I0722 03:55:57.014261    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a238bb05-e07d-4298-98be-9d336c163b01", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037b350)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0722 03:55:57.014289    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"a238bb05-e07d-4298-98be-9d336c163b01", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc00037b350)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/bzimage", Initrd:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0722 03:55:57.014330    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "a238bb05-e07d-4298-98be-9d336c163b01", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/ha-090000-m02.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/tty,log=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/bzimage,/Users/jenkins/minikube-integration/19313-1111/.minikube/machine
s/ha-090000-m02/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-090000"}
	I0722 03:55:57.014365    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U a238bb05-e07d-4298-98be-9d336c163b01 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/ha-090000-m02.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/tty,log=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/console-ring -f kexec,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/bzimage,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/initrd,earlyprintk=serial loglevel=3 console=t
tyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-090000"
	I0722 03:55:57.014400    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0722 03:55:57.015680    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 DEBUG: hyperkit: Pid is 3958
	I0722 03:55:57.016180    3911 main.go:141] libmachine: (ha-090000-m02) DBG | Attempt 0
	I0722 03:55:57.016197    3911 main.go:141] libmachine: (ha-090000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:55:57.016259    3911 main.go:141] libmachine: (ha-090000-m02) DBG | hyperkit pid from json: 3958
	I0722 03:55:57.018025    3911 main.go:141] libmachine: (ha-090000-m02) DBG | Searching for 4e:65:fa:f9:26:3 in /var/db/dhcpd_leases ...
	I0722 03:55:57.018041    3911 main.go:141] libmachine: (ha-090000-m02) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0722 03:55:57.018086    3911 main.go:141] libmachine: (ha-090000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:de:e:68:47:cf:44 ID:1,de:e:68:47:cf:44 Lease:0x669f8c1b}
	I0722 03:55:57.018095    3911 main.go:141] libmachine: (ha-090000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:b6:20:31:8d:e0:89 ID:1,b6:20:31:8d:e0:89 Lease:0x669e3a76}
	I0722 03:55:57.018102    3911 main.go:141] libmachine: (ha-090000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ca:7d:32:d9:5d:55 ID:1,ca:7d:32:d9:5d:55 Lease:0x669f8bc8}
	I0722 03:55:57.018112    3911 main.go:141] libmachine: (ha-090000-m02) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4e:65:fa:f9:26:3 ID:1,4e:65:fa:f9:26:3 Lease:0x669f8b16}
	I0722 03:55:57.018118    3911 main.go:141] libmachine: (ha-090000-m02) DBG | Found match: 4e:65:fa:f9:26:3
	I0722 03:55:57.018122    3911 main.go:141] libmachine: (ha-090000-m02) DBG | IP: 192.169.0.6
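With the hyperkit driver, the machine's IP is recovered by matching the generated MAC address (4e:65:fa:f9:26:3) against the macOS DHCP lease database, which is what the dhcpd_leases entries above show. The same data can be inspected by hand on the host; a sketch:

    # sketch: look up the lease for a given MAC on the macOS host
    grep -B 2 -A 4 "4e:65:fa:f9:26:3" /var/db/dhcpd_leases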
	I0722 03:55:57.018178    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetConfigRaw
	I0722 03:55:57.018834    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetIP
	I0722 03:55:57.019009    3911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/config.json ...
	I0722 03:55:57.019499    3911 machine.go:94] provisionDockerMachine start ...
	I0722 03:55:57.019509    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 03:55:57.019651    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:55:57.019770    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:55:57.019892    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:55:57.020010    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:55:57.020098    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:55:57.020264    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:55:57.020422    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0722 03:55:57.020435    3911 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 03:55:57.023607    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0722 03:55:57.031862    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0722 03:55:57.032835    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0722 03:55:57.032848    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0722 03:55:57.032855    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0722 03:55:57.032861    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0722 03:55:57.411442    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0722 03:55:57.411461    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0722 03:55:57.526363    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0722 03:55:57.526382    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0722 03:55:57.526390    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0722 03:55:57.526396    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0722 03:55:57.527265    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0722 03:55:57.527278    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:55:57 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0722 03:56:02.785857    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:56:02 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0722 03:56:02.785940    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:56:02 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0722 03:56:02.785949    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:56:02 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0722 03:56:02.812798    3911 main.go:141] libmachine: (ha-090000-m02) DBG | 2024/07/22 03:56:02 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0722 03:56:32.075580    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 03:56:32.075594    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetMachineName
	I0722 03:56:32.075720    3911 buildroot.go:166] provisioning hostname "ha-090000-m02"
	I0722 03:56:32.075731    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetMachineName
	I0722 03:56:32.075826    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:32.075933    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:32.076015    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.076119    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.076212    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:32.076341    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:56:32.076492    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0722 03:56:32.076502    3911 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-090000-m02 && echo "ha-090000-m02" | sudo tee /etc/hostname
	I0722 03:56:32.136897    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-090000-m02
	
	I0722 03:56:32.136912    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:32.137046    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:32.137157    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.137250    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.137341    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:32.137474    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:56:32.137607    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0722 03:56:32.137618    3911 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-090000-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-090000-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-090000-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 03:56:32.192449    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 03:56:32.192463    3911 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19313-1111/.minikube CaCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19313-1111/.minikube}
	I0722 03:56:32.192472    3911 buildroot.go:174] setting up certificates
	I0722 03:56:32.192482    3911 provision.go:84] configureAuth start
	I0722 03:56:32.192492    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetMachineName
	I0722 03:56:32.192621    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetIP
	I0722 03:56:32.192721    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:32.192798    3911 provision.go:143] copyHostCerts
	I0722 03:56:32.192826    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem
	I0722 03:56:32.192874    3911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem, removing ...
	I0722 03:56:32.192879    3911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem
	I0722 03:56:32.193015    3911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem (1078 bytes)
	I0722 03:56:32.193230    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem
	I0722 03:56:32.193264    3911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem, removing ...
	I0722 03:56:32.193269    3911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem
	I0722 03:56:32.193346    3911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem (1123 bytes)
	I0722 03:56:32.193513    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem
	I0722 03:56:32.193541    3911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem, removing ...
	I0722 03:56:32.193546    3911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem
	I0722 03:56:32.193618    3911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem (1675 bytes)
	I0722 03:56:32.193767    3911 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem org=jenkins.ha-090000-m02 san=[127.0.0.1 192.169.0.6 ha-090000-m02 localhost minikube]
	I0722 03:56:32.314909    3911 provision.go:177] copyRemoteCerts
	I0722 03:56:32.314954    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 03:56:32.314968    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:32.315107    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:32.315208    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.315309    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:32.315384    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/id_rsa Username:docker}
	I0722 03:56:32.347809    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0722 03:56:32.347885    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0722 03:56:32.366931    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0722 03:56:32.366988    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0722 03:56:32.386030    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0722 03:56:32.386103    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0722 03:56:32.404971    3911 provision.go:87] duration metric: took 212.48697ms to configureAuth
	I0722 03:56:32.404983    3911 buildroot.go:189] setting minikube options for container-runtime
	I0722 03:56:32.405138    3911 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:56:32.405152    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 03:56:32.405288    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:32.405375    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:32.405462    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.405546    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.405633    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:32.405741    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:56:32.405866    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0722 03:56:32.405874    3911 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0722 03:56:32.454313    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0722 03:56:32.454324    3911 buildroot.go:70] root file system type: tmpfs
	I0722 03:56:32.454404    3911 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0722 03:56:32.454417    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:32.454548    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:32.454656    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.454765    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.454869    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:32.454989    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:56:32.455128    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0722 03:56:32.455173    3911 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0722 03:56:32.513991    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0722 03:56:32.514007    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:32.514163    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:32.514257    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.514355    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:32.514458    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:32.514588    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:56:32.514721    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0722 03:56:32.514733    3911 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0722 03:56:34.211339    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0722 03:56:34.211353    3911 machine.go:97] duration metric: took 37.192847433s to provisionDockerMachine
	I0722 03:56:34.211364    3911 start.go:293] postStartSetup for "ha-090000-m02" (driver="hyperkit")
	I0722 03:56:34.211371    3911 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 03:56:34.211386    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 03:56:34.211563    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 03:56:34.211577    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:34.211687    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:34.211786    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:34.211882    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:34.211969    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/id_rsa Username:docker}
	I0722 03:56:34.242978    3911 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 03:56:34.245962    3911 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 03:56:34.245971    3911 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1111/.minikube/addons for local assets ...
	I0722 03:56:34.246060    3911 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1111/.minikube/files for local assets ...
	I0722 03:56:34.246200    3911 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem -> 16372.pem in /etc/ssl/certs
	I0722 03:56:34.246206    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem -> /etc/ssl/certs/16372.pem
	I0722 03:56:34.246360    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 03:56:34.254372    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem --> /etc/ssl/certs/16372.pem (1708 bytes)
	I0722 03:56:34.273009    3911 start.go:296] duration metric: took 61.631077ms for postStartSetup
	I0722 03:56:34.273028    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 03:56:34.273172    3911 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0722 03:56:34.273182    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:34.273265    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:34.273351    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:34.273439    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:34.273519    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/id_rsa Username:docker}
	I0722 03:56:34.305174    3911 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0722 03:56:34.305226    3911 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0722 03:56:34.339922    3911 fix.go:56] duration metric: took 37.411144035s for fixHost
	I0722 03:56:34.339947    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:34.340082    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:34.340179    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:34.340258    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:34.340343    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:34.340478    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:56:34.340622    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.6 22 <nil> <nil>}
	I0722 03:56:34.340630    3911 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0722 03:56:34.388578    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721645794.572489059
	
	I0722 03:56:34.388591    3911 fix.go:216] guest clock: 1721645794.572489059
	I0722 03:56:34.388596    3911 fix.go:229] Guest: 2024-07-22 03:56:34.572489059 -0700 PDT Remote: 2024-07-22 03:56:34.339936 -0700 PDT m=+80.375710715 (delta=232.553059ms)
	I0722 03:56:34.388606    3911 fix.go:200] guest clock delta is within tolerance: 232.553059ms
	I0722 03:56:34.388609    3911 start.go:83] releasing machines lock for "ha-090000-m02", held for 37.459858552s
	I0722 03:56:34.388627    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 03:56:34.388762    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetIP
	I0722 03:56:34.409792    3911 out.go:177] * Found network options:
	I0722 03:56:34.430136    3911 out.go:177]   - NO_PROXY=192.169.0.5
	W0722 03:56:34.451143    3911 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 03:56:34.451179    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 03:56:34.452017    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 03:56:34.452288    3911 main.go:141] libmachine: (ha-090000-m02) Calling .DriverName
	I0722 03:56:34.452418    3911 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 03:56:34.452457    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	W0722 03:56:34.452511    3911 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 03:56:34.452619    3911 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0722 03:56:34.452639    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHHostname
	I0722 03:56:34.452667    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:34.452899    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHPort
	I0722 03:56:34.452939    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:34.453127    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHKeyPath
	I0722 03:56:34.453158    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:34.453309    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetSSHUsername
	I0722 03:56:34.453305    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/id_rsa Username:docker}
	I0722 03:56:34.453445    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.6 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m02/id_rsa Username:docker}
	W0722 03:56:34.481920    3911 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 03:56:34.481981    3911 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 03:56:34.527590    3911 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 03:56:34.527602    3911 start.go:495] detecting cgroup driver to use...
	I0722 03:56:34.527664    3911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 03:56:34.542920    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0722 03:56:34.551387    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0722 03:56:34.559553    3911 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0722 03:56:34.559598    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0722 03:56:34.567825    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 03:56:34.576145    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0722 03:56:34.584472    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 03:56:34.592914    3911 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 03:56:34.601360    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0722 03:56:34.609666    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0722 03:56:34.618581    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0722 03:56:34.626849    3911 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 03:56:34.634297    3911 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 03:56:34.642011    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:56:34.733806    3911 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0722 03:56:34.753393    3911 start.go:495] detecting cgroup driver to use...
	I0722 03:56:34.753463    3911 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0722 03:56:34.769228    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 03:56:34.781756    3911 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 03:56:34.797930    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 03:56:34.808316    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 03:56:34.818407    3911 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0722 03:56:34.839910    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 03:56:34.852187    3911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 03:56:34.867777    3911 ssh_runner.go:195] Run: which cri-dockerd
	I0722 03:56:34.870845    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0722 03:56:34.878342    3911 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0722 03:56:34.891766    3911 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0722 03:56:34.986612    3911 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0722 03:56:35.092574    3911 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0722 03:56:35.092596    3911 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0722 03:56:35.106385    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:56:35.202045    3911 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 03:56:37.547949    3911 ssh_runner.go:235] Completed: sudo systemctl restart docker: (2.345948624s)
	I0722 03:56:37.548007    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.socket
	I0722 03:56:37.559709    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 03:56:37.570592    3911 ssh_runner.go:195] Run: sudo systemctl unmask cri-docker.socket
	I0722 03:56:37.669571    3911 ssh_runner.go:195] Run: sudo systemctl enable cri-docker.socket
	I0722 03:56:37.763201    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:56:37.875925    3911 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.socket
	I0722 03:56:37.889982    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service cri-docker.service
	I0722 03:56:37.900245    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:56:38.003656    3911 ssh_runner.go:195] Run: sudo systemctl restart cri-docker.service
	I0722 03:56:38.067963    3911 start.go:542] Will wait 60s for socket path /var/run/cri-dockerd.sock
	I0722 03:56:38.068036    3911 ssh_runner.go:195] Run: stat /var/run/cri-dockerd.sock
	I0722 03:56:38.072622    3911 start.go:563] Will wait 60s for crictl version
	I0722 03:56:38.072673    3911 ssh_runner.go:195] Run: which crictl
	I0722 03:56:38.075745    3911 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0722 03:56:38.103382    3911 start.go:579] Version:  0.1.0
	RuntimeName:  docker
	RuntimeVersion:  27.0.3
	RuntimeApiVersion:  v1
	I0722 03:56:38.103467    3911 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 03:56:38.119903    3911 ssh_runner.go:195] Run: docker version --format {{.Server.Version}}
	I0722 03:56:38.160816    3911 out.go:204] * Preparing Kubernetes v1.30.3 on Docker 27.0.3 ...
	I0722 03:56:38.182482    3911 out.go:177]   - env NO_PROXY=192.169.0.5
	I0722 03:56:38.203478    3911 main.go:141] libmachine: (ha-090000-m02) Calling .GetIP
	I0722 03:56:38.203850    3911 ssh_runner.go:195] Run: grep 192.169.0.1	host.minikube.internal$ /etc/hosts
	I0722 03:56:38.207987    3911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.169.0.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 03:56:38.217642    3911 mustload.go:65] Loading cluster: ha-090000
	I0722 03:56:38.217804    3911 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:56:38.218020    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:56:38.218035    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:56:38.226637    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51996
	I0722 03:56:38.226983    3911 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:56:38.227325    3911 main.go:141] libmachine: Using API Version  1
	I0722 03:56:38.227343    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:56:38.227630    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:56:38.227748    3911 main.go:141] libmachine: (ha-090000) Calling .GetState
	I0722 03:56:38.227836    3911 main.go:141] libmachine: (ha-090000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:56:38.227899    3911 main.go:141] libmachine: (ha-090000) DBG | hyperkit pid from json: 3926
	I0722 03:56:38.228835    3911 host.go:66] Checking if "ha-090000" exists ...
	I0722 03:56:38.229086    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:56:38.229101    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:56:38.237412    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51998
	I0722 03:56:38.237753    3911 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:56:38.238100    3911 main.go:141] libmachine: Using API Version  1
	I0722 03:56:38.238118    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:56:38.238328    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:56:38.238453    3911 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:56:38.238565    3911 certs.go:68] Setting up /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000 for IP: 192.169.0.6
	I0722 03:56:38.238571    3911 certs.go:194] generating shared ca certs ...
	I0722 03:56:38.238580    3911 certs.go:226] acquiring lock for ca certs: {Name:mk31b6ba3ba4e51acc59db740baf7c8ba8dd988b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:56:38.238710    3911 certs.go:235] skipping valid "minikubeCA" ca cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.key
	I0722 03:56:38.238765    3911 certs.go:235] skipping valid "proxyClientCA" ca cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.key
	I0722 03:56:38.238773    3911 certs.go:256] generating profile certs ...
	I0722 03:56:38.238865    3911 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/client.key
	I0722 03:56:38.238954    3911 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key.cd5997a2
	I0722 03:56:38.239013    3911 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.key
	I0722 03:56:38.239026    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0722 03:56:38.239049    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0722 03:56:38.239069    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0722 03:56:38.239087    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0722 03:56:38.239104    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0722 03:56:38.239123    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0722 03:56:38.239143    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0722 03:56:38.239166    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0722 03:56:38.239250    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/1637.pem (1338 bytes)
	W0722 03:56:38.239289    3911 certs.go:480] ignoring /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/1637_empty.pem, impossibly tiny 0 bytes
	I0722 03:56:38.239297    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem (1675 bytes)
	I0722 03:56:38.239330    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem (1078 bytes)
	I0722 03:56:38.239361    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem (1123 bytes)
	I0722 03:56:38.239392    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem (1675 bytes)
	I0722 03:56:38.239457    3911 certs.go:484] found cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem (1708 bytes)
	I0722 03:56:38.239492    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/1637.pem -> /usr/share/ca-certificates/1637.pem
	I0722 03:56:38.239513    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem -> /usr/share/ca-certificates/16372.pem
	I0722 03:56:38.239532    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0722 03:56:38.239558    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:56:38.239660    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:56:38.239755    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:56:38.239850    3911 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:56:38.239942    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/id_rsa Username:docker}
	I0722 03:56:38.265993    3911 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0722 03:56:38.269678    3911 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0722 03:56:38.278304    3911 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0722 03:56:38.281402    3911 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1675 bytes)
	I0722 03:56:38.289616    3911 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0722 03:56:38.292667    3911 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0722 03:56:38.300512    3911 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0722 03:56:38.303570    3911 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0722 03:56:38.311600    3911 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0722 03:56:38.314768    3911 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0722 03:56:38.322792    3911 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0722 03:56:38.325989    3911 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0722 03:56:38.334090    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0722 03:56:38.354251    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1679 bytes)
	I0722 03:56:38.373942    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0722 03:56:38.393826    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1675 bytes)
	I0722 03:56:38.413300    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0722 03:56:38.433234    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0722 03:56:38.452691    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0722 03:56:38.472206    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0722 03:56:38.492624    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/1637.pem --> /usr/share/ca-certificates/1637.pem (1338 bytes)
	I0722 03:56:38.511779    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem --> /usr/share/ca-certificates/16372.pem (1708 bytes)
	I0722 03:56:38.531604    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0722 03:56:38.550960    3911 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0722 03:56:38.564536    3911 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1675 bytes)
	I0722 03:56:38.577906    3911 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0722 03:56:38.591620    3911 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0722 03:56:38.605203    3911 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0722 03:56:38.619039    3911 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0722 03:56:38.633179    3911 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0722 03:56:38.646763    3911 ssh_runner.go:195] Run: openssl version
	I0722 03:56:38.650909    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/16372.pem && ln -fs /usr/share/ca-certificates/16372.pem /etc/ssl/certs/16372.pem"
	I0722 03:56:38.659202    3911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/16372.pem
	I0722 03:56:38.662546    3911 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Jul 22 10:38 /usr/share/ca-certificates/16372.pem
	I0722 03:56:38.662579    3911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/16372.pem
	I0722 03:56:38.666667    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/16372.pem /etc/ssl/certs/3ec20f2e.0"
	I0722 03:56:38.675008    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0722 03:56:38.683335    3911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0722 03:56:38.686876    3911 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Jul 22 10:29 /usr/share/ca-certificates/minikubeCA.pem
	I0722 03:56:38.686923    3911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0722 03:56:38.691071    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0722 03:56:38.699373    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/1637.pem && ln -fs /usr/share/ca-certificates/1637.pem /etc/ssl/certs/1637.pem"
	I0722 03:56:38.707510    3911 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/1637.pem
	I0722 03:56:38.710890    3911 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Jul 22 10:38 /usr/share/ca-certificates/1637.pem
	I0722 03:56:38.710923    3911 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/1637.pem
	I0722 03:56:38.715062    3911 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/1637.pem /etc/ssl/certs/51391683.0"
	I0722 03:56:38.723255    3911 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0722 03:56:38.726701    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0722 03:56:38.730990    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0722 03:56:38.735283    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0722 03:56:38.739568    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0722 03:56:38.743725    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0722 03:56:38.747941    3911 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
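The string of `openssl x509 -noout -in <cert> -checkend 86400` runs above are expiry probes: the command exits non-zero once a certificate has less than 86400 seconds (24 hours) of validity remaining, which is what tells the provisioner whether control-plane certs need regenerating. A minimal Go sketch of the same check, using one certificate path taken from the log; this is illustrative only and not minikube's implementation:

	// Illustrative sketch (not minikube code): the Go equivalent of
	// `openssl x509 -noout -in <cert> -checkend 86400` — parse the
	// certificate and report whether it stays valid for another 24 hours.
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	func validFor(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		// Still valid if "now + d" falls before the certificate's NotAfter.
		return time.Now().Add(d).Before(cert.NotAfter), nil
	}

	func main() {
		ok, err := validFor("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("valid for at least 24h:", ok)
	}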
	I0722 03:56:38.752113    3911 kubeadm.go:934] updating node {m02 192.169.0.6 8443 v1.30.3 docker true true} ...
	I0722 03:56:38.752169    3911 kubeadm.go:946] kubelet [Unit]
	Wants=docker.socket
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.30.3/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-090000-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.169.0.6
	
	[Install]
	 config:
	{KubernetesVersion:v1.30.3 ClusterName:ha-090000 Namespace:default APIServerHAVIP:192.169.0.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0722 03:56:38.752183    3911 kube-vip.go:115] generating kube-vip config ...
	I0722 03:56:38.752213    3911 ssh_runner.go:195] Run: sudo sh -c "modprobe --all ip_vs ip_vs_rr ip_vs_wrr ip_vs_sh nf_conntrack"
	I0722 03:56:38.764297    3911 kube-vip.go:167] auto-enabling control-plane load-balancing in kube-vip
	I0722 03:56:38.764339    3911 kube-vip.go:137] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_nodename
	      valueFrom:
	        fieldRef:
	          fieldPath: spec.nodeName
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.169.0.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.8.0
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
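The manifest printed above is later copied to /etc/kubernetes/manifests/kube-vip.yaml (see the scp step below), so the kubelet runs kube-vip as a static pod that advertises the control-plane VIP 192.169.0.254 on port 8443. A small sketch that decodes such a manifest into a corev1.Pod and reports the configured VIP; the file path is the one from this log, and the code is an illustration, not part of minikube:

	// Illustrative sketch (not minikube code): confirm a generated kube-vip
	// static-pod manifest parses as a corev1.Pod and print its VIP address.
	package main

	import (
		"fmt"
		"os"

		corev1 "k8s.io/api/core/v1"
		"sigs.k8s.io/yaml"
	)

	func main() {
		data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
		if err != nil {
			panic(err)
		}
		var pod corev1.Pod
		if err := yaml.Unmarshal(data, &pod); err != nil {
			panic(err)
		}
		if len(pod.Spec.Containers) == 0 {
			panic("manifest has no containers")
		}
		for _, env := range pod.Spec.Containers[0].Env {
			if env.Name == "address" {
				fmt.Println("kube-vip VIP address:", env.Value) // expected: 192.169.0.254
			}
		}
	}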
	I0722 03:56:38.764386    3911 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.30.3
	I0722 03:56:38.777566    3911 binaries.go:44] Found k8s binaries, skipping transfer
	I0722 03:56:38.777617    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0722 03:56:38.785844    3911 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (311 bytes)
	I0722 03:56:38.799378    3911 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0722 03:56:38.812569    3911 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1440 bytes)
	I0722 03:56:38.826035    3911 ssh_runner.go:195] Run: grep 192.169.0.254	control-plane.minikube.internal$ /etc/hosts
	I0722 03:56:38.829004    3911 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.169.0.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0722 03:56:38.838894    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:56:38.934878    3911 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 03:56:38.949889    3911 start.go:235] Will wait 6m0s for node &{Name:m02 IP:192.169.0.6 Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 03:56:38.950085    3911 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:56:38.971273    3911 out.go:177] * Verifying Kubernetes components...
	I0722 03:56:38.991992    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:56:39.123554    3911 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0722 03:56:39.136167    3911 loader.go:395] Config loaded from file:  /Users/jenkins/minikube-integration/19313-1111/kubeconfig
	I0722 03:56:39.136377    3911 kapi.go:59] client config for ha-090000: &rest.Config{Host:"https://192.169.0.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/client.crt", KeyFile:"/Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/client.key", CAFile:"/Users/jenkins/minikube-integration/19313-1111/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}
, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0xc727ba0), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0722 03:56:39.136421    3911 kubeadm.go:483] Overriding stale ClientConfig host https://192.169.0.254:8443 with https://192.169.0.5:8443
	I0722 03:56:39.136590    3911 node_ready.go:35] waiting up to 6m0s for node "ha-090000-m02" to be "Ready" ...
	I0722 03:56:39.136660    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:39.136665    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:39.136672    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:39.136677    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:40.137255    3911 round_trippers.go:574] Response Status:  in 1000 milliseconds
	I0722 03:56:40.137479    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:40.137503    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:40.137521    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:40.137534    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:47.940026    3911 round_trippers.go:574] Response Status: 200 OK in 7802 milliseconds
	I0722 03:56:47.940733    3911 node_ready.go:49] node "ha-090000-m02" has status "Ready":"True"
	I0722 03:56:47.940746    3911 node_ready.go:38] duration metric: took 8.804377648s for node "ha-090000-m02" to be "Ready" ...
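The node_ready wait above polls the API server directly (the GETs logged by round_trippers) until the node reports a Ready condition. A hedged client-go sketch of an equivalent wait, using the kubeconfig path and node name that appear in this log; it illustrates the pattern only and is not minikube's node_ready.go:

	// Illustrative sketch (not minikube code): poll a node's Ready condition
	// with client-go until it is True or the surrounding context times out.
	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func waitNodeReady(ctx context.Context, cs *kubernetes.Clientset, name string) error {
		for {
			node, err := cs.CoreV1().Nodes().Get(ctx, name, metav1.GetOptions{})
			if err == nil {
				for _, c := range node.Status.Conditions {
					if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
						return nil
					}
				}
			}
			select {
			case <-ctx.Done():
				return ctx.Err()
			case <-time.After(2 * time.Second):
			}
		}
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/Users/jenkins/minikube-integration/19313-1111/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs := kubernetes.NewForConfigOrDie(cfg)
		ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
		defer cancel()
		if err := waitNodeReady(ctx, cs, "ha-090000-m02"); err != nil {
			panic(err)
		}
		fmt.Println("node Ready")
	}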
	I0722 03:56:47.940753    3911 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0722 03:56:47.940808    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0722 03:56:47.940815    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:47.940823    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:47.940827    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.019911    3911 round_trippers.go:574] Response Status: 200 OK in 79 milliseconds
	I0722 03:56:48.026784    3911 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-lf5mv" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.026849    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-lf5mv
	I0722 03:56:48.026855    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.026862    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.026866    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.031605    3911 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 03:56:48.032135    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:48.032143    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.032150    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.032153    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.034575    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:48.034884    3911 pod_ready.go:92] pod "coredns-7db6d8ff4d-lf5mv" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:48.034894    3911 pod_ready.go:81] duration metric: took 8.095254ms for pod "coredns-7db6d8ff4d-lf5mv" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.034902    3911 pod_ready.go:78] waiting up to 6m0s for pod "coredns-7db6d8ff4d-mjc97" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.034940    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/coredns-7db6d8ff4d-mjc97
	I0722 03:56:48.034951    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.034959    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.034963    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.037811    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:48.038390    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:48.038397    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.038403    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.038412    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.042255    3911 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 03:56:48.042713    3911 pod_ready.go:92] pod "coredns-7db6d8ff4d-mjc97" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:48.042723    3911 pod_ready.go:81] duration metric: took 7.815334ms for pod "coredns-7db6d8ff4d-mjc97" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.042730    3911 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-090000" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.042769    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-090000
	I0722 03:56:48.042774    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.042780    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.042784    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.046998    3911 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 03:56:48.047505    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:48.047512    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.047517    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.047519    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.050594    3911 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 03:56:48.051034    3911 pod_ready.go:92] pod "etcd-ha-090000" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:48.051045    3911 pod_ready.go:81] duration metric: took 8.309873ms for pod "etcd-ha-090000" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.051052    3911 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-090000-m02" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.051096    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-090000-m02
	I0722 03:56:48.051102    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.051108    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.051112    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.055364    3911 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0722 03:56:48.055818    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:48.055827    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.055833    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.055837    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.058858    3911 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 03:56:48.059331    3911 pod_ready.go:92] pod "etcd-ha-090000-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:48.059342    3911 pod_ready.go:81] duration metric: took 8.283096ms for pod "etcd-ha-090000-m02" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.059349    3911 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-090000-m03" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.059399    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/etcd-ha-090000-m03
	I0722 03:56:48.059405    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.059412    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.059415    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.069366    3911 round_trippers.go:574] Response Status: 200 OK in 9 milliseconds
	I0722 03:56:48.140952    3911 request.go:629] Waited for 71.140962ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:48.140996    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:48.141001    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.141007    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.141013    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.150505    3911 round_trippers.go:574] Response Status: 404 Not Found in 9 milliseconds
	I0722 03:56:48.150672    3911 pod_ready.go:97] node "ha-090000-m03" hosting pod "etcd-ha-090000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:48.150684    3911 pod_ready.go:81] duration metric: took 91.332094ms for pod "etcd-ha-090000-m03" in "kube-system" namespace to be "Ready" ...
	E0722 03:56:48.150693    3911 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-090000-m03" hosting pod "etcd-ha-090000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:48.150707    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-090000" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.341259    3911 request.go:629] Waited for 190.473586ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-090000
	I0722 03:56:48.341296    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-090000
	I0722 03:56:48.341301    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.341307    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.341311    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.346534    3911 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 03:56:48.541247    3911 request.go:629] Waited for 194.341501ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:48.541301    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:48.541310    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.541317    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.541321    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.543864    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:48.544294    3911 pod_ready.go:92] pod "kube-apiserver-ha-090000" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:48.544304    3911 pod_ready.go:81] duration metric: took 393.600781ms for pod "kube-apiserver-ha-090000" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.544310    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-090000-m02" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.740936    3911 request.go:629] Waited for 196.590173ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-090000-m02
	I0722 03:56:48.741009    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-090000-m02
	I0722 03:56:48.741017    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.741025    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.741032    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.743601    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:48.941584    3911 request.go:629] Waited for 197.554429ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:48.941670    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:48.941676    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:48.941681    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:48.941685    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:48.943442    3911 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0722 03:56:48.943712    3911 pod_ready.go:92] pod "kube-apiserver-ha-090000-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:48.943722    3911 pod_ready.go:81] duration metric: took 399.417249ms for pod "kube-apiserver-ha-090000-m02" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:48.943728    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-090000-m03" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:49.142238    3911 request.go:629] Waited for 198.455178ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-090000-m03
	I0722 03:56:49.142276    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-090000-m03
	I0722 03:56:49.142283    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:49.142291    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:49.142297    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:49.144759    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:49.341711    3911 request.go:629] Waited for 196.420201ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:49.341743    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:49.341748    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:49.341754    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:49.341757    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:49.343407    3911 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0722 03:56:49.343465    3911 pod_ready.go:97] node "ha-090000-m03" hosting pod "kube-apiserver-ha-090000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:49.343477    3911 pod_ready.go:81] duration metric: took 399.754899ms for pod "kube-apiserver-ha-090000-m03" in "kube-system" namespace to be "Ready" ...
	E0722 03:56:49.343485    3911 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-090000-m03" hosting pod "kube-apiserver-ha-090000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:49.343492    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-090000" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:49.540820    3911 request.go:629] Waited for 197.295627ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-090000
	I0722 03:56:49.540859    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-090000
	I0722 03:56:49.540864    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:49.540873    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:49.540889    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:49.542752    3911 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0722 03:56:49.741810    3911 request.go:629] Waited for 198.496804ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:49.741941    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:49.741953    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:49.741965    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:49.741971    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:49.745200    3911 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 03:56:49.746626    3911 pod_ready.go:92] pod "kube-controller-manager-ha-090000" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:49.746670    3911 pod_ready.go:81] duration metric: took 403.181202ms for pod "kube-controller-manager-ha-090000" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:49.746679    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-090000-m02" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:49.942498    3911 request.go:629] Waited for 195.70501ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-090000-m02
	I0722 03:56:49.942556    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-090000-m02
	I0722 03:56:49.942566    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:49.942576    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:49.942583    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:49.945821    3911 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 03:56:50.141728    3911 request.go:629] Waited for 194.653258ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:50.141778    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:50.141788    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:50.141874    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:50.141884    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:50.144857    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:50.145401    3911 pod_ready.go:92] pod "kube-controller-manager-ha-090000-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:50.145413    3911 pod_ready.go:81] duration metric: took 398.731517ms for pod "kube-controller-manager-ha-090000-m02" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:50.145421    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-090000-m03" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:50.342252    3911 request.go:629] Waited for 196.790992ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-090000-m03
	I0722 03:56:50.342380    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-090000-m03
	I0722 03:56:50.342391    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:50.342402    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:50.342409    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:50.345338    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:50.541942    3911 request.go:629] Waited for 196.02759ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:50.542016    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:50.542024    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:50.542030    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:50.542035    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:50.543861    3911 round_trippers.go:574] Response Status: 404 Not Found in 1 milliseconds
	I0722 03:56:50.543979    3911 pod_ready.go:97] node "ha-090000-m03" hosting pod "kube-controller-manager-ha-090000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:50.543991    3911 pod_ready.go:81] duration metric: took 398.575179ms for pod "kube-controller-manager-ha-090000-m03" in "kube-system" namespace to be "Ready" ...
	E0722 03:56:50.543999    3911 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-090000-m03" hosting pod "kube-controller-manager-ha-090000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:50.544007    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8f92w" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:50.741981    3911 request.go:629] Waited for 197.931605ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f92w
	I0722 03:56:50.742035    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8f92w
	I0722 03:56:50.742108    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:50.742123    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:50.742139    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:50.745292    3911 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 03:56:50.941201    3911 request.go:629] Waited for 195.378005ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m04
	I0722 03:56:50.941242    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m04
	I0722 03:56:50.941250    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:50.941279    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:50.941285    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:50.943392    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:50.943959    3911 pod_ready.go:92] pod "kube-proxy-8f92w" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:50.943968    3911 pod_ready.go:81] duration metric: took 399.965093ms for pod "kube-proxy-8f92w" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:50.943975    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-8wl7h" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:51.141802    3911 request.go:629] Waited for 197.795735ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8wl7h
	I0722 03:56:51.141881    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-8wl7h
	I0722 03:56:51.141889    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:51.141897    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:51.141901    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:51.144430    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:51.341886    3911 request.go:629] Waited for 196.964343ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:51.341949    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:51.342008    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:51.342021    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:51.342042    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:51.345071    3911 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 03:56:51.345563    3911 pod_ready.go:92] pod "kube-proxy-8wl7h" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:51.345575    3911 pod_ready.go:81] duration metric: took 401.60562ms for pod "kube-proxy-8wl7h" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:51.345584    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-s5kg7" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:51.541992    3911 request.go:629] Waited for 196.373771ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5kg7
	I0722 03:56:51.542055    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-s5kg7
	I0722 03:56:51.542062    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:51.542069    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:51.542073    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:51.544061    3911 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0722 03:56:51.741851    3911 request.go:629] Waited for 197.301001ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:51.741903    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:51.741920    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:51.741972    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:51.741981    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:51.744924    3911 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0722 03:56:51.745061    3911 pod_ready.go:97] node "ha-090000-m03" hosting pod "kube-proxy-s5kg7" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:51.745083    3911 pod_ready.go:81] duration metric: took 399.503782ms for pod "kube-proxy-s5kg7" in "kube-system" namespace to be "Ready" ...
	E0722 03:56:51.745093    3911 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-090000-m03" hosting pod "kube-proxy-s5kg7" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:51.745099    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xzpdq" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:51.942237    3911 request.go:629] Waited for 197.092533ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xzpdq
	I0722 03:56:51.942331    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-proxy-xzpdq
	I0722 03:56:51.942339    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:51.942348    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:51.942352    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:51.944379    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:52.140792    3911 request.go:629] Waited for 195.988207ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:52.140891    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:52.140898    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:52.140905    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:52.140908    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:52.143865    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:52.144152    3911 pod_ready.go:92] pod "kube-proxy-xzpdq" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:52.144162    3911 pod_ready.go:81] duration metric: took 399.065856ms for pod "kube-proxy-xzpdq" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:52.144174    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-090000" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:52.341088    3911 request.go:629] Waited for 196.884909ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-090000
	I0722 03:56:52.341120    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-090000
	I0722 03:56:52.341125    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:52.341131    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:52.341158    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:52.342922    3911 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0722 03:56:52.541268    3911 request.go:629] Waited for 197.724279ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:52.541331    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000
	I0722 03:56:52.541336    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:52.541343    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:52.541348    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:52.543046    3911 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0722 03:56:52.543447    3911 pod_ready.go:92] pod "kube-scheduler-ha-090000" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:52.543457    3911 pod_ready.go:81] duration metric: took 399.28772ms for pod "kube-scheduler-ha-090000" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:52.543466    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-090000-m02" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:52.741611    3911 request.go:629] Waited for 198.11239ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-090000-m02
	I0722 03:56:52.741678    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-090000-m02
	I0722 03:56:52.741684    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:52.741690    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:52.741694    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:52.743685    3911 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0722 03:56:52.941884    3911 request.go:629] Waited for 197.596709ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:52.941966    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m02
	I0722 03:56:52.941974    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:52.941983    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:52.941990    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:52.944672    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:52.944946    3911 pod_ready.go:92] pod "kube-scheduler-ha-090000-m02" in "kube-system" namespace has status "Ready":"True"
	I0722 03:56:52.944957    3911 pod_ready.go:81] duration metric: took 401.495544ms for pod "kube-scheduler-ha-090000-m02" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:52.944964    3911 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-090000-m03" in "kube-system" namespace to be "Ready" ...
	I0722 03:56:53.140781    3911 request.go:629] Waited for 195.779713ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-090000-m03
	I0722 03:56:53.140822    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-090000-m03
	I0722 03:56:53.140828    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:53.140846    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:53.140857    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:53.143259    3911 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0722 03:56:53.340903    3911 request.go:629] Waited for 197.282616ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:53.341040    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes/ha-090000-m03
	I0722 03:56:53.341054    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:53.341066    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:53.341072    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:53.343900    3911 round_trippers.go:574] Response Status: 404 Not Found in 2 milliseconds
	I0722 03:56:53.344052    3911 pod_ready.go:97] node "ha-090000-m03" hosting pod "kube-scheduler-ha-090000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:53.344080    3911 pod_ready.go:81] duration metric: took 399.121362ms for pod "kube-scheduler-ha-090000-m03" in "kube-system" namespace to be "Ready" ...
	E0722 03:56:53.344087    3911 pod_ready.go:66] WaitExtra: waitPodCondition: node "ha-090000-m03" hosting pod "kube-scheduler-ha-090000-m03" in "kube-system" namespace is currently not "Ready" (skipping!): error getting node "ha-090000-m03": nodes "ha-090000-m03" not found
	I0722 03:56:53.344093    3911 pod_ready.go:38] duration metric: took 5.403478999s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
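
	The pod_ready waits above all follow the same two-step pattern: fetch the static pod from the kube-system namespace, then fetch the node that hosts it, and count the pod as Ready only when its PodReady condition is True and the hosting node still exists (which is why every *-m03 pod is skipped once nodes "ha-090000-m03" returns 404). A minimal client-go sketch of that check follows; the clientset wiring and helper name are illustrative assumptions, not minikube's actual code.

	    import (
	        "context"

	        corev1 "k8s.io/api/core/v1"
	        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	        "k8s.io/client-go/kubernetes"
	    )

	    // podAndNodeReady reports whether the named kube-system pod has PodReady=True
	    // and whether its hosting node can still be fetched from the API server.
	    func podAndNodeReady(ctx context.Context, cs kubernetes.Interface, podName string) (bool, error) {
	        pod, err := cs.CoreV1().Pods("kube-system").Get(ctx, podName, metav1.GetOptions{})
	        if err != nil {
	            return false, err
	        }
	        // Mirrors the "nodes ... not found" skip seen in the log above.
	        if _, err := cs.CoreV1().Nodes().Get(ctx, pod.Spec.NodeName, metav1.GetOptions{}); err != nil {
	            return false, err
	        }
	        for _, cond := range pod.Status.Conditions {
	            if cond.Type == corev1.PodReady {
	                return cond.Status == corev1.ConditionTrue, nil
	            }
	        }
	        return false, nil
	    }
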
	I0722 03:56:53.344113    3911 api_server.go:52] waiting for apiserver process to appear ...
	I0722 03:56:53.344169    3911 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 03:56:53.355872    3911 api_server.go:72] duration metric: took 14.406346458s to wait for apiserver process to appear ...
	I0722 03:56:53.355884    3911 api_server.go:88] waiting for apiserver healthz status ...
	I0722 03:56:53.355903    3911 api_server.go:253] Checking apiserver healthz at https://192.169.0.5:8443/healthz ...
	I0722 03:56:53.360168    3911 api_server.go:279] https://192.169.0.5:8443/healthz returned 200:
	ok
	I0722 03:56:53.360204    3911 round_trippers.go:463] GET https://192.169.0.5:8443/version
	I0722 03:56:53.360209    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:53.360215    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:53.360219    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:53.360847    3911 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0722 03:56:53.360928    3911 api_server.go:141] control plane version: v1.30.3
	I0722 03:56:53.360938    3911 api_server.go:131] duration metric: took 5.049309ms to wait for apiserver health ...
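
	The healthz probe is a plain HTTPS GET against the control-plane endpoint that counts as healthy when it returns 200 with an "ok" body. A rough sketch, assuming an *http.Client that already trusts the cluster CA (the real code builds its TLS config from the profile's certificates); the helper name is hypothetical.

	    import (
	        "context"
	        "fmt"
	        "io"
	        "net/http"
	        "strings"
	    )

	    // checkHealthz performs GET <endpoint>/healthz and returns nil only for a
	    // 200 response whose body is "ok", matching the log lines above.
	    func checkHealthz(ctx context.Context, client *http.Client, endpoint string) error {
	        req, err := http.NewRequestWithContext(ctx, http.MethodGet, endpoint+"/healthz", nil)
	        if err != nil {
	            return err
	        }
	        resp, err := client.Do(req)
	        if err != nil {
	            return err
	        }
	        defer resp.Body.Close()
	        body, err := io.ReadAll(resp.Body)
	        if err != nil {
	            return err
	        }
	        if resp.StatusCode != http.StatusOK || strings.TrimSpace(string(body)) != "ok" {
	            return fmt.Errorf("apiserver not healthy: %d %q", resp.StatusCode, body)
	        }
	        return nil
	    }
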
	I0722 03:56:53.360953    3911 system_pods.go:43] waiting for kube-system pods to appear ...
	I0722 03:56:53.540855    3911 request.go:629] Waited for 179.859471ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0722 03:56:53.540957    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0722 03:56:53.540968    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:53.540979    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:53.540985    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:53.546462    3911 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 03:56:53.551792    3911 system_pods.go:59] 26 kube-system pods found
	I0722 03:56:53.551807    3911 system_pods.go:61] "coredns-7db6d8ff4d-lf5mv" [cd051db1-dcbb-4fee-85d9-be13d1be38ec] Running
	I0722 03:56:53.551813    3911 system_pods.go:61] "coredns-7db6d8ff4d-mjc97" [ac1f1032-14ce-4c0c-b95b-a86bd4ef7810] Running
	I0722 03:56:53.551817    3911 system_pods.go:61] "etcd-ha-090000" [ec0787c7-a5cb-4375-b6c7-04e80160dbd9] Running
	I0722 03:56:53.551820    3911 system_pods.go:61] "etcd-ha-090000-m02" [70e6e1d6-208c-45b6-ad64-c10be5faedbb] Running
	I0722 03:56:53.551823    3911 system_pods.go:61] "etcd-ha-090000-m03" [ed74b70b-4483-4ac9-9db2-5c1507439fbf] Running
	I0722 03:56:53.551830    3911 system_pods.go:61] "kindnet-kqb2r" [58565238-777a-421f-a15d-38bd5daf596e] Running
	I0722 03:56:53.551834    3911 system_pods.go:61] "kindnet-lf6b4" [aadac04f-abbe-481b-accf-df0991b98748] Running
	I0722 03:56:53.551836    3911 system_pods.go:61] "kindnet-mqxjd" [439b0e4a-14b8-4556-9ae6-6a26590b6d5d] Running
	I0722 03:56:53.551839    3911 system_pods.go:61] "kindnet-xt575" [21e859c8-a102-4b48-ba9d-3b3902be8ba1] Running
	I0722 03:56:53.551842    3911 system_pods.go:61] "kube-apiserver-ha-090000" [c0377564-cef8-4807-8ab1-3fc6f2607591] Running
	I0722 03:56:53.551844    3911 system_pods.go:61] "kube-apiserver-ha-090000-m02" [87130092-7fea-4cf8-a1b4-b2b853d60334] Running
	I0722 03:56:53.551847    3911 system_pods.go:61] "kube-apiserver-ha-090000-m03" [056a2588-da71-4189-93cd-10a92f10d8d4] Running
	I0722 03:56:53.551850    3911 system_pods.go:61] "kube-controller-manager-ha-090000" [89cfb4c4-8d84-42f2-bae3-3962aada627b] Running
	I0722 03:56:53.551853    3911 system_pods.go:61] "kube-controller-manager-ha-090000-m02" [9173940b-a550-4f67-b37c-78e456b18a13] Running
	I0722 03:56:53.551855    3911 system_pods.go:61] "kube-controller-manager-ha-090000-m03" [75846dcb-f9d9-46c6-8eaa-857c3da39b9a] Running
	I0722 03:56:53.551858    3911 system_pods.go:61] "kube-proxy-8f92w" [10da7b52-073d-40c9-87ea-8484d68147e3] Running
	I0722 03:56:53.551861    3911 system_pods.go:61] "kube-proxy-8wl7h" [210fb608-afcf-4f5c-9b75-cc949c268854] Running
	I0722 03:56:53.551864    3911 system_pods.go:61] "kube-proxy-s5kg7" [8513335b-221c-4602-9aaa-b1e85b828bb4] Running
	I0722 03:56:53.551866    3911 system_pods.go:61] "kube-proxy-xzpdq" [d7b9fd02-6f5f-47a4-afe2-1dab0d9141b7] Running
	I0722 03:56:53.551869    3911 system_pods.go:61] "kube-scheduler-ha-090000" [82031515-de24-4248-97ff-2bb892974db3] Running
	I0722 03:56:53.551872    3911 system_pods.go:61] "kube-scheduler-ha-090000-m02" [2f042e46-2b51-4b25-b94a-c22dde65c7fa] Running
	I0722 03:56:53.551874    3911 system_pods.go:61] "kube-scheduler-ha-090000-m03" [bf7cca91-4911-4f81-bde0-cbb089bd2fd2] Running
	I0722 03:56:53.551877    3911 system_pods.go:61] "kube-vip-ha-090000" [46ed0197-35a7-40cd-8480-0e66a09d4d69] Running
	I0722 03:56:53.551880    3911 system_pods.go:61] "kube-vip-ha-090000-m02" [b6025cfc-c08e-4981-b1b6-4f26ba5d5538] Running
	I0722 03:56:53.551882    3911 system_pods.go:61] "kube-vip-ha-090000-m03" [e7bc337b-5f22-4c55-86cb-1417b15343bd] Running
	I0722 03:56:53.551885    3911 system_pods.go:61] "storage-provisioner" [c1214845-bf0e-4808-9e11-faf18dd3cb3f] Running
	I0722 03:56:53.551889    3911 system_pods.go:74] duration metric: took 190.935916ms to wait for pod list to return data ...
	I0722 03:56:53.551895    3911 default_sa.go:34] waiting for default service account to be created ...
	I0722 03:56:53.741633    3911 request.go:629] Waited for 189.696516ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0722 03:56:53.741686    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/default/serviceaccounts
	I0722 03:56:53.741703    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:53.741714    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:53.741724    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:53.744889    3911 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0722 03:56:53.745045    3911 default_sa.go:45] found service account: "default"
	I0722 03:56:53.745059    3911 default_sa.go:55] duration metric: took 193.164449ms for default service account to be created ...
	I0722 03:56:53.745066    3911 system_pods.go:116] waiting for k8s-apps to be running ...
	I0722 03:56:53.941905    3911 request.go:629] Waited for 196.736167ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0722 03:56:53.941953    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/namespaces/kube-system/pods
	I0722 03:56:53.941965    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:53.941979    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:53.941986    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:53.947853    3911 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0722 03:56:53.953138    3911 system_pods.go:86] 26 kube-system pods found
	I0722 03:56:53.953150    3911 system_pods.go:89] "coredns-7db6d8ff4d-lf5mv" [cd051db1-dcbb-4fee-85d9-be13d1be38ec] Running
	I0722 03:56:53.953154    3911 system_pods.go:89] "coredns-7db6d8ff4d-mjc97" [ac1f1032-14ce-4c0c-b95b-a86bd4ef7810] Running
	I0722 03:56:53.953158    3911 system_pods.go:89] "etcd-ha-090000" [ec0787c7-a5cb-4375-b6c7-04e80160dbd9] Running
	I0722 03:56:53.953161    3911 system_pods.go:89] "etcd-ha-090000-m02" [70e6e1d6-208c-45b6-ad64-c10be5faedbb] Running
	I0722 03:56:53.953164    3911 system_pods.go:89] "etcd-ha-090000-m03" [ed74b70b-4483-4ac9-9db2-5c1507439fbf] Running
	I0722 03:56:53.953167    3911 system_pods.go:89] "kindnet-kqb2r" [58565238-777a-421f-a15d-38bd5daf596e] Running
	I0722 03:56:53.953171    3911 system_pods.go:89] "kindnet-lf6b4" [aadac04f-abbe-481b-accf-df0991b98748] Running
	I0722 03:56:53.953174    3911 system_pods.go:89] "kindnet-mqxjd" [439b0e4a-14b8-4556-9ae6-6a26590b6d5d] Running
	I0722 03:56:53.953176    3911 system_pods.go:89] "kindnet-xt575" [21e859c8-a102-4b48-ba9d-3b3902be8ba1] Running
	I0722 03:56:53.953179    3911 system_pods.go:89] "kube-apiserver-ha-090000" [c0377564-cef8-4807-8ab1-3fc6f2607591] Running
	I0722 03:56:53.953182    3911 system_pods.go:89] "kube-apiserver-ha-090000-m02" [87130092-7fea-4cf8-a1b4-b2b853d60334] Running
	I0722 03:56:53.953185    3911 system_pods.go:89] "kube-apiserver-ha-090000-m03" [056a2588-da71-4189-93cd-10a92f10d8d4] Running
	I0722 03:56:53.953189    3911 system_pods.go:89] "kube-controller-manager-ha-090000" [89cfb4c4-8d84-42f2-bae3-3962aada627b] Running
	I0722 03:56:53.953192    3911 system_pods.go:89] "kube-controller-manager-ha-090000-m02" [9173940b-a550-4f67-b37c-78e456b18a13] Running
	I0722 03:56:53.953195    3911 system_pods.go:89] "kube-controller-manager-ha-090000-m03" [75846dcb-f9d9-46c6-8eaa-857c3da39b9a] Running
	I0722 03:56:53.953199    3911 system_pods.go:89] "kube-proxy-8f92w" [10da7b52-073d-40c9-87ea-8484d68147e3] Running
	I0722 03:56:53.953203    3911 system_pods.go:89] "kube-proxy-8wl7h" [210fb608-afcf-4f5c-9b75-cc949c268854] Running
	I0722 03:56:53.953206    3911 system_pods.go:89] "kube-proxy-s5kg7" [8513335b-221c-4602-9aaa-b1e85b828bb4] Running
	I0722 03:56:53.953209    3911 system_pods.go:89] "kube-proxy-xzpdq" [d7b9fd02-6f5f-47a4-afe2-1dab0d9141b7] Running
	I0722 03:56:53.953214    3911 system_pods.go:89] "kube-scheduler-ha-090000" [82031515-de24-4248-97ff-2bb892974db3] Running
	I0722 03:56:53.953219    3911 system_pods.go:89] "kube-scheduler-ha-090000-m02" [2f042e46-2b51-4b25-b94a-c22dde65c7fa] Running
	I0722 03:56:53.953222    3911 system_pods.go:89] "kube-scheduler-ha-090000-m03" [bf7cca91-4911-4f81-bde0-cbb089bd2fd2] Running
	I0722 03:56:53.953226    3911 system_pods.go:89] "kube-vip-ha-090000" [46ed0197-35a7-40cd-8480-0e66a09d4d69] Running
	I0722 03:56:53.953229    3911 system_pods.go:89] "kube-vip-ha-090000-m02" [b6025cfc-c08e-4981-b1b6-4f26ba5d5538] Running
	I0722 03:56:53.953232    3911 system_pods.go:89] "kube-vip-ha-090000-m03" [e7bc337b-5f22-4c55-86cb-1417b15343bd] Running
	I0722 03:56:53.953235    3911 system_pods.go:89] "storage-provisioner" [c1214845-bf0e-4808-9e11-faf18dd3cb3f] Running
	I0722 03:56:53.953241    3911 system_pods.go:126] duration metric: took 208.1764ms to wait for k8s-apps to be running ...
	I0722 03:56:53.953247    3911 system_svc.go:44] waiting for kubelet service to be running ....
	I0722 03:56:53.953298    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 03:56:53.964081    3911 system_svc.go:56] duration metric: took 10.830617ms WaitForService to wait for kubelet
	I0722 03:56:53.964094    3911 kubeadm.go:582] duration metric: took 15.014585328s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 03:56:53.964109    3911 node_conditions.go:102] verifying NodePressure condition ...
	I0722 03:56:54.141596    3911 request.go:629] Waited for 177.455634ms due to client-side throttling, not priority and fairness, request: GET:https://192.169.0.5:8443/api/v1/nodes
	I0722 03:56:54.141627    3911 round_trippers.go:463] GET https://192.169.0.5:8443/api/v1/nodes
	I0722 03:56:54.141632    3911 round_trippers.go:469] Request Headers:
	I0722 03:56:54.141645    3911 round_trippers.go:473]     Accept: application/json, */*
	I0722 03:56:54.141650    3911 round_trippers.go:473]     User-Agent: minikube-darwin-amd64/v0.0.0 (darwin/amd64) kubernetes/$Format
	I0722 03:56:54.156645    3911 round_trippers.go:574] Response Status: 200 OK in 14 milliseconds
	I0722 03:56:54.157279    3911 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 03:56:54.157291    3911 node_conditions.go:123] node cpu capacity is 2
	I0722 03:56:54.157302    3911 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 03:56:54.157305    3911 node_conditions.go:123] node cpu capacity is 2
	I0722 03:56:54.157309    3911 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0722 03:56:54.157315    3911 node_conditions.go:123] node cpu capacity is 2
	I0722 03:56:54.157319    3911 node_conditions.go:105] duration metric: took 193.210914ms to run NodePressure ...
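
	The NodePressure step lists all nodes and reads each node's reported capacity; the three surviving nodes above each report 2 CPUs and 17734596Ki of ephemeral storage. A small client-go sketch of reading those two fields (imports as in the pod-readiness sketch earlier, plus fmt), assuming an already-constructed clientset:

	    // printNodeCapacity prints the same capacity fields logged by node_conditions.
	    func printNodeCapacity(ctx context.Context, cs kubernetes.Interface) error {
	        nodes, err := cs.CoreV1().Nodes().List(ctx, metav1.ListOptions{})
	        if err != nil {
	            return err
	        }
	        for _, n := range nodes.Items {
	            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
	            cpu := n.Status.Capacity[corev1.ResourceCPU]
	            fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n", n.Name, storage.String(), cpu.String())
	        }
	        return nil
	    }
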
	I0722 03:56:54.157327    3911 start.go:241] waiting for startup goroutines ...
	I0722 03:56:54.157344    3911 start.go:255] writing updated cluster config ...
	I0722 03:56:54.178247    3911 out.go:177] 
	I0722 03:56:54.215301    3911 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:56:54.215427    3911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/config.json ...
	I0722 03:56:54.237875    3911 out.go:177] * Starting "ha-090000-m04" worker node in "ha-090000" cluster
	I0722 03:56:54.313643    3911 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 03:56:54.313672    3911 cache.go:56] Caching tarball of preloaded images
	I0722 03:56:54.313891    3911 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0722 03:56:54.313909    3911 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 03:56:54.314031    3911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/config.json ...
	I0722 03:56:54.314743    3911 start.go:360] acquireMachinesLock for ha-090000-m04: {Name:mk52223550765842aacf96640479870ec8b5e985 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 03:56:54.314865    3911 start.go:364] duration metric: took 97.548µs to acquireMachinesLock for "ha-090000-m04"
	I0722 03:56:54.314900    3911 start.go:96] Skipping create...Using existing machine configuration
	I0722 03:56:54.314909    3911 fix.go:54] fixHost starting: m04
	I0722 03:56:54.315362    3911 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:56:54.315392    3911 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:56:54.324846    3911 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52004
	I0722 03:56:54.325299    3911 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:56:54.325696    3911 main.go:141] libmachine: Using API Version  1
	I0722 03:56:54.325717    3911 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:56:54.325994    3911 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:56:54.326143    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:56:54.326258    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetState
	I0722 03:56:54.326348    3911 main.go:141] libmachine: (ha-090000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:56:54.326459    3911 main.go:141] libmachine: (ha-090000-m04) DBG | hyperkit pid from json: 3802
	I0722 03:56:54.327677    3911 main.go:141] libmachine: (ha-090000-m04) DBG | hyperkit pid 3802 missing from process table
	I0722 03:56:54.327712    3911 fix.go:112] recreateIfNeeded on ha-090000-m04: state=Stopped err=<nil>
	I0722 03:56:54.327724    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	W0722 03:56:54.327832    3911 fix.go:138] unexpected machine state, will restart: <nil>
	I0722 03:56:54.347991    3911 out.go:177] * Restarting existing hyperkit VM for "ha-090000-m04" ...
	I0722 03:56:54.405790    3911 main.go:141] libmachine: (ha-090000-m04) Calling .Start
	I0722 03:56:54.406014    3911 main.go:141] libmachine: (ha-090000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:56:54.406069    3911 main.go:141] libmachine: (ha-090000-m04) minikube might have been shutdown in an unclean way, the hyperkit pid file still exists: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/hyperkit.pid
	I0722 03:56:54.407060    3911 main.go:141] libmachine: (ha-090000-m04) DBG | hyperkit pid 3802 missing from process table
	I0722 03:56:54.407069    3911 main.go:141] libmachine: (ha-090000-m04) DBG | pid 3802 is in state "Stopped"
	I0722 03:56:54.407087    3911 main.go:141] libmachine: (ha-090000-m04) DBG | Removing stale pid file /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/hyperkit.pid...
	I0722 03:56:54.407246    3911 main.go:141] libmachine: (ha-090000-m04) DBG | Using UUID f13599ad-3762-43bd-a5c6-6cfffb7afaca
	I0722 03:56:54.437806    3911 main.go:141] libmachine: (ha-090000-m04) DBG | Generated MAC ca:7d:32:d9:5d:55
	I0722 03:56:54.437841    3911 main.go:141] libmachine: (ha-090000-m04) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-090000
	I0722 03:56:54.437986    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f13599ad-3762-43bd-a5c6-6cfffb7afaca", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a6900)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0722 03:56:54.438025    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"f13599ad-3762-43bd-a5c6-6cfffb7afaca", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0003a6900)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/bzimage", Initrd:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:(*os.Process)(nil)}
	I0722 03:56:54.438089    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "f13599ad-3762-43bd-a5c6-6cfffb7afaca", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/ha-090000-m04.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/tty,log=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/bzimage,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-090000"}
	I0722 03:56:54.438135    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U f13599ad-3762-43bd-a5c6-6cfffb7afaca -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/ha-090000-m04.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/tty,log=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/console-ring -f kexec,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/bzimage,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=ha-090000"
	I0722 03:56:54.438159    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0722 03:56:54.439735    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 DEBUG: hyperkit: Pid is 3973
	I0722 03:56:54.440437    3911 main.go:141] libmachine: (ha-090000-m04) DBG | Attempt 0
	I0722 03:56:54.440473    3911 main.go:141] libmachine: (ha-090000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:56:54.440546    3911 main.go:141] libmachine: (ha-090000-m04) DBG | hyperkit pid from json: 3973
	I0722 03:56:54.443188    3911 main.go:141] libmachine: (ha-090000-m04) DBG | Searching for ca:7d:32:d9:5d:55 in /var/db/dhcpd_leases ...
	I0722 03:56:54.443309    3911 main.go:141] libmachine: (ha-090000-m04) DBG | Found 7 entries in /var/db/dhcpd_leases!
	I0722 03:56:54.443345    3911 main.go:141] libmachine: (ha-090000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4e:65:fa:f9:26:3 ID:1,4e:65:fa:f9:26:3 Lease:0x669f8c45}
	I0722 03:56:54.443358    3911 main.go:141] libmachine: (ha-090000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:de:e:68:47:cf:44 ID:1,de:e:68:47:cf:44 Lease:0x669f8c1b}
	I0722 03:56:54.443395    3911 main.go:141] libmachine: (ha-090000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:b6:20:31:8d:e0:89 ID:1,b6:20:31:8d:e0:89 Lease:0x669e3a76}
	I0722 03:56:54.443440    3911 main.go:141] libmachine: (ha-090000-m04) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ca:7d:32:d9:5d:55 ID:1,ca:7d:32:d9:5d:55 Lease:0x669f8bc8}
	I0722 03:56:54.443458    3911 main.go:141] libmachine: (ha-090000-m04) DBG | Found match: ca:7d:32:d9:5d:55
	I0722 03:56:54.443482    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetConfigRaw
	I0722 03:56:54.443506    3911 main.go:141] libmachine: (ha-090000-m04) DBG | IP: 192.169.0.8
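
	After restarting the VM, the hyperkit driver recovers its IP address by scanning macOS's DHCP lease database (/var/db/dhcpd_leases) for the MAC address it generated. A simplified sketch is below; the ip_address/hw_address field names and the line-oriented parsing are assumptions inferred from the entries printed above, not the driver's real parser, and real lease files may strip leading zeros from MAC octets.

	    import (
	        "fmt"
	        "os"
	        "strings"
	    )

	    // ipForMAC returns the IP of the lease whose hardware address ends with mac.
	    // It assumes each lease block lists ip_address= before hw_address=.
	    func ipForMAC(leaseFile, mac string) (string, error) {
	        data, err := os.ReadFile(leaseFile) // e.g. /var/db/dhcpd_leases
	        if err != nil {
	            return "", err
	        }
	        var ip string
	        for _, line := range strings.Split(string(data), "\n") {
	            line = strings.TrimSpace(line)
	            if v, ok := strings.CutPrefix(line, "ip_address="); ok {
	                ip = v
	            }
	            // hw_address lines look like "1,ca:7d:32:d9:5d:55".
	            if v, ok := strings.CutPrefix(line, "hw_address="); ok && strings.HasSuffix(v, mac) {
	                return ip, nil
	            }
	        }
	        return "", fmt.Errorf("no DHCP lease found for %s", mac)
	    }
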
	I0722 03:56:54.444347    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetIP
	I0722 03:56:54.444653    3911 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/ha-090000/config.json ...
	I0722 03:56:54.445364    3911 machine.go:94] provisionDockerMachine start ...
	I0722 03:56:54.445380    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:56:54.445624    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:56:54.445766    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:56:54.445925    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:56:54.446085    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:56:54.446269    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:56:54.446478    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:56:54.446750    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0722 03:56:54.446762    3911 main.go:141] libmachine: About to run SSH command:
	hostname
	I0722 03:56:54.450021    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0722 03:56:54.474479    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0722 03:56:54.475620    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0722 03:56:54.475643    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0722 03:56:54.475657    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0722 03:56:54.475667    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0722 03:56:54.866202    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0722 03:56:54.866218    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0722 03:56:54.981166    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0722 03:56:54.981182    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0722 03:56:54.981189    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0722 03:56:54.981195    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0722 03:56:54.982030    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0722 03:56:54.982040    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:56:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0722 03:57:00.347122    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:57:00 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0722 03:57:00.347199    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:57:00 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0722 03:57:00.347212    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:57:00 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0722 03:57:00.370939    3911 main.go:141] libmachine: (ha-090000-m04) DBG | 2024/07/22 03:57:00 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0722 03:57:29.507146    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0722 03:57:29.507164    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetMachineName
	I0722 03:57:29.507326    3911 buildroot.go:166] provisioning hostname "ha-090000-m04"
	I0722 03:57:29.507337    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetMachineName
	I0722 03:57:29.507436    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:29.507532    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:29.507631    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.507730    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.507816    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:29.507942    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:57:29.508105    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0722 03:57:29.508119    3911 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-090000-m04 && echo "ha-090000-m04" | sudo tee /etc/hostname
	I0722 03:57:29.566504    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-090000-m04
	
	I0722 03:57:29.566520    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:29.566676    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:29.566768    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.566861    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.566958    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:29.567095    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:57:29.567238    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0722 03:57:29.567250    3911 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-090000-m04' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-090000-m04/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-090000-m04' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 03:57:29.622448    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 03:57:29.622463    3911 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19313-1111/.minikube CaCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19313-1111/.minikube}
	I0722 03:57:29.622472    3911 buildroot.go:174] setting up certificates
	I0722 03:57:29.622479    3911 provision.go:84] configureAuth start
	I0722 03:57:29.622486    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetMachineName
	I0722 03:57:29.622644    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetIP
	I0722 03:57:29.622751    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:29.622856    3911 provision.go:143] copyHostCerts
	I0722 03:57:29.622886    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem -> /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem
	I0722 03:57:29.622945    3911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem, removing ...
	I0722 03:57:29.622952    3911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem
	I0722 03:57:29.623163    3911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem (1078 bytes)
	I0722 03:57:29.623368    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem -> /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem
	I0722 03:57:29.623410    3911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem, removing ...
	I0722 03:57:29.623415    3911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem
	I0722 03:57:29.623495    3911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem (1123 bytes)
	I0722 03:57:29.623640    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem -> /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem
	I0722 03:57:29.623679    3911 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem, removing ...
	I0722 03:57:29.623684    3911 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem
	I0722 03:57:29.623770    3911 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem (1675 bytes)
	I0722 03:57:29.623918    3911 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem org=jenkins.ha-090000-m04 san=[127.0.0.1 192.169.0.8 ha-090000-m04 localhost minikube]
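
	configureAuth regenerates a server certificate for the node, signed by the local minikube CA and carrying the SAN list printed above (127.0.0.1, the node's DHCP address, its hostname, localhost, minikube). A compressed crypto/x509 sketch of producing such a certificate, assuming the CA certificate and key are already loaded; the helper name and validity period are illustrative.

	    import (
	        "crypto/rand"
	        "crypto/rsa"
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "math/big"
	        "net"
	        "time"
	    )

	    // buildServerCert creates a key pair and a CA-signed server certificate whose
	    // SANs match the san=[...] list in the log line above.
	    func buildServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey) (certDER []byte, key *rsa.PrivateKey, err error) {
	        key, err = rsa.GenerateKey(rand.Reader, 2048)
	        if err != nil {
	            return nil, nil, err
	        }
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(time.Now().UnixNano()),
	            Subject:      pkix.Name{Organization: []string{"jenkins.ha-090000-m04"}},
	            NotBefore:    time.Now(),
	            NotAfter:     time.Now().AddDate(3, 0, 0),
	            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.169.0.8")},
	            DNSNames:     []string{"ha-090000-m04", "localhost", "minikube"},
	        }
	        certDER, err = x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	        return certDER, key, err
	    }
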
	I0722 03:57:29.798481    3911 provision.go:177] copyRemoteCerts
	I0722 03:57:29.798536    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 03:57:29.798553    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:29.798720    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:29.798832    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.798934    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:29.799034    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/id_rsa Username:docker}
	I0722 03:57:29.828994    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0722 03:57:29.829071    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0722 03:57:29.849145    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0722 03:57:29.849216    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0722 03:57:29.868964    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0722 03:57:29.869035    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 03:57:29.889770    3911 provision.go:87] duration metric: took 267.289907ms to configureAuth
	I0722 03:57:29.889784    3911 buildroot.go:189] setting minikube options for container-runtime
	I0722 03:57:29.889952    3911 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:57:29.889967    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:57:29.890101    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:29.890199    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:29.890275    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.890367    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.890452    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:29.890562    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:57:29.890690    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0722 03:57:29.890698    3911 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0722 03:57:29.941114    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0722 03:57:29.941126    3911 buildroot.go:70] root file system type: tmpfs
	I0722 03:57:29.941203    3911 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0722 03:57:29.941214    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:29.941336    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:29.941424    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.941505    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.941596    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:29.941717    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:57:29.941859    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0722 03:57:29.941908    3911 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment="NO_PROXY=192.169.0.5"
	Environment="NO_PROXY=192.169.0.5,192.169.0.6"
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0722 03:57:29.999626    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	Environment=NO_PROXY=192.169.0.5
	Environment=NO_PROXY=192.169.0.5,192.169.0.6
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0722 03:57:29.999643    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:29.999785    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:29.999874    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:29.999968    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:30.000060    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:30.000202    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:57:30.000354    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0722 03:57:30.000367    3911 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0722 03:57:31.614623    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0722 03:57:31.614645    3911 machine.go:97] duration metric: took 37.170271356s to provisionDockerMachine
	I0722 03:57:31.614654    3911 start.go:293] postStartSetup for "ha-090000-m04" (driver="hyperkit")
	I0722 03:57:31.614661    3911 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 03:57:31.614672    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:57:31.614863    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 03:57:31.614878    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:31.614977    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:31.615074    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:31.615173    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:31.615258    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/id_rsa Username:docker}
	I0722 03:57:31.646689    3911 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 03:57:31.649952    3911 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 03:57:31.649963    3911 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1111/.minikube/addons for local assets ...
	I0722 03:57:31.650063    3911 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1111/.minikube/files for local assets ...
	I0722 03:57:31.650246    3911 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem -> 16372.pem in /etc/ssl/certs
	I0722 03:57:31.650252    3911 vm_assets.go:164] NewFileAsset: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem -> /etc/ssl/certs/16372.pem
	I0722 03:57:31.650455    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 03:57:31.658413    3911 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem --> /etc/ssl/certs/16372.pem (1708 bytes)
	I0722 03:57:31.678576    3911 start.go:296] duration metric: took 63.915273ms for postStartSetup
	I0722 03:57:31.678597    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:57:31.678768    3911 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0722 03:57:31.678782    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:31.678870    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:31.678960    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:31.679037    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:31.679115    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/id_rsa Username:docker}
	I0722 03:57:31.710161    3911 machine.go:198] restoring vm config from /var/lib/minikube/backup: [etc]
	I0722 03:57:31.710221    3911 ssh_runner.go:195] Run: sudo rsync --archive --update /var/lib/minikube/backup/etc /
	I0722 03:57:31.764084    3911 fix.go:56] duration metric: took 37.450180093s for fixHost
	I0722 03:57:31.764110    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:31.764259    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:31.764351    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:31.764456    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:31.764557    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:31.764680    3911 main.go:141] libmachine: Using SSH client type: native
	I0722 03:57:31.764822    3911 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xb2830c0] 0xb285e20 <nil>  [] 0s} 192.169.0.8 22 <nil> <nil>}
	I0722 03:57:31.764829    3911 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 03:57:31.816488    3911 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721645852.004461870
	
	I0722 03:57:31.816502    3911 fix.go:216] guest clock: 1721645852.004461870
	I0722 03:57:31.816508    3911 fix.go:229] Guest: 2024-07-22 03:57:32.00446187 -0700 PDT Remote: 2024-07-22 03:57:31.764099 -0700 PDT m=+137.801419594 (delta=240.36287ms)
	I0722 03:57:31.816522    3911 fix.go:200] guest clock delta is within tolerance: 240.36287ms
	I0722 03:57:31.816527    3911 start.go:83] releasing machines lock for "ha-090000-m04", held for 37.50265184s
	I0722 03:57:31.816545    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:57:31.816680    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetIP
	I0722 03:57:31.839252    3911 out.go:177] * Found network options:
	I0722 03:57:31.860719    3911 out.go:177]   - NO_PROXY=192.169.0.5,192.169.0.6
	W0722 03:57:31.881811    3911 proxy.go:119] fail to check proxy env: Error ip not in block
	W0722 03:57:31.881829    3911 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 03:57:31.881843    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:57:31.882321    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:57:31.882463    3911 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:57:31.882549    3911 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 03:57:31.882589    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	W0722 03:57:31.882613    3911 proxy.go:119] fail to check proxy env: Error ip not in block
	W0722 03:57:31.882631    3911 proxy.go:119] fail to check proxy env: Error ip not in block
	I0722 03:57:31.882716    3911 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0722 03:57:31.882718    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:31.882733    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:57:31.882836    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:31.882856    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:57:31.882964    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:57:31.883010    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:31.883091    3911 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:57:31.883141    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/id_rsa Username:docker}
	I0722 03:57:31.883196    3911 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/id_rsa Username:docker}
	W0722 03:57:31.910458    3911 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 03:57:31.910515    3911 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 03:57:31.960457    3911 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 03:57:31.960475    3911 start.go:495] detecting cgroup driver to use...
	I0722 03:57:31.960567    3911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 03:57:31.976097    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0722 03:57:31.984637    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0722 03:57:31.992923    3911 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0722 03:57:31.992964    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0722 03:57:32.001492    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 03:57:32.009758    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0722 03:57:32.018152    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 03:57:32.026574    3911 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 03:57:32.034947    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0722 03:57:32.043182    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0722 03:57:32.051485    3911 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0722 03:57:32.059820    3911 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 03:57:32.067251    3911 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 03:57:32.074803    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:57:32.169893    3911 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0722 03:57:32.188393    3911 start.go:495] detecting cgroup driver to use...
	I0722 03:57:32.188465    3911 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0722 03:57:32.206602    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 03:57:32.223241    3911 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 03:57:32.241086    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 03:57:32.252378    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 03:57:32.263494    3911 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0722 03:57:32.285713    3911 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 03:57:32.296269    3911 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 03:57:32.311089    3911 ssh_runner.go:195] Run: which cri-dockerd
	I0722 03:57:32.314143    3911 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0722 03:57:32.321424    3911 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0722 03:57:32.335207    3911 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0722 03:57:32.429597    3911 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0722 03:57:32.542464    3911 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0722 03:57:32.542490    3911 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0722 03:57:32.557136    3911 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 03:57:32.660326    3911 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 03:58:33.699453    3911 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.040752064s)
	I0722 03:58:33.699525    3911 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0722 03:58:33.734950    3911 out.go:177] 
	W0722 03:58:33.756536    3911 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 22 10:57:29 ha-090000-m04 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 10:57:29 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:29.446112727Z" level=info msg="Starting up"
	Jul 22 10:57:29 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:29.446594219Z" level=info msg="containerd not running, starting managed containerd"
	Jul 22 10:57:29 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:29.447194660Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=516
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.462050990Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.476816092Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.476858837Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.476899215Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.476909407Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.477031508Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.477068105Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.477176376Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.477210709Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.477222939Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.477230881Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.477351816Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.477553357Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.479128485Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.479167134Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.479271300Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.479304705Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.479417021Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.479458809Z" level=info msg="metadata content store policy set" policy=shared
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481448117Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481494900Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481508142Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481517623Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481527464Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481569984Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481744950Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481852966Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481872403Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481907193Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481919076Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481928860Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481936657Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481955520Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481967273Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481975440Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481983423Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.481991104Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482004822Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482014286Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482022158Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482030329Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482040470Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482053851Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482064290Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482072410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482080983Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482093264Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482100888Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482108346Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482115856Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482130159Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482146190Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482154580Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482161596Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482209554Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482243396Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482253257Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482261382Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482267623Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482276094Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482285841Z" level=info msg="NRI interface is disabled by configuration."
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482429840Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482484213Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482510048Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 22 10:57:29 ha-090000-m04 dockerd[516]: time="2024-07-22T10:57:29.482541660Z" level=info msg="containerd successfully booted in 0.021090s"
	Jul 22 10:57:30 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:30.467405362Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 22 10:57:30 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:30.479322696Z" level=info msg="Loading containers: start."
	Jul 22 10:57:30 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:30.599220957Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 22 10:57:30 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:30.665815288Z" level=info msg="Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address"
	Jul 22 10:57:31 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:31.771955379Z" level=warning msg="error locating sandbox id 023e4273edcd40723038879300e7321a9aec3901cb772dbfe3c38850836b1315: sandbox 023e4273edcd40723038879300e7321a9aec3901cb772dbfe3c38850836b1315 not found"
	Jul 22 10:57:31 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:31.772061725Z" level=info msg="Loading containers: done."
	Jul 22 10:57:31 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:31.779357823Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 22 10:57:31 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:31.779511676Z" level=info msg="Daemon has completed initialization"
	Jul 22 10:57:31 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:31.801250223Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 22 10:57:31 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:31.801353911Z" level=info msg="API listen on [::]:2376"
	Jul 22 10:57:31 ha-090000-m04 systemd[1]: Started Docker Application Container Engine.
	Jul 22 10:57:32 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:32.860896719Z" level=info msg="Processing signal 'terminated'"
	Jul 22 10:57:32 ha-090000-m04 systemd[1]: Stopping Docker Application Container Engine...
	Jul 22 10:57:32 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:32.862255865Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 22 10:57:32 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:32.862561859Z" level=info msg="Daemon shutdown complete"
	Jul 22 10:57:32 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:32.862690583Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 22 10:57:32 ha-090000-m04 dockerd[509]: time="2024-07-22T10:57:32.862732129Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 22 10:57:33 ha-090000-m04 systemd[1]: docker.service: Deactivated successfully.
	Jul 22 10:57:33 ha-090000-m04 systemd[1]: Stopped Docker Application Container Engine.
	Jul 22 10:57:33 ha-090000-m04 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 10:57:33 ha-090000-m04 dockerd[1100]: time="2024-07-22T10:57:33.897261523Z" level=info msg="Starting up"
	Jul 22 10:58:33 ha-090000-m04 dockerd[1100]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 22 10:58:33 ha-090000-m04 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 22 10:58:33 ha-090000-m04 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 22 10:58:33 ha-090000-m04 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0722 03:58:33.756659    3911 out.go:239] * 
	W0722 03:58:33.757887    3911 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 03:58:33.836490    3911 out.go:177] 
	
	
	==> Docker <==
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.322254347Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.322411712Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.322506665Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.324060847Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 10:57:24 ha-090000 cri-dockerd[1362]: time="2024-07-22T10:57:24Z" level=info msg="Will attempt to re-write config file /var/lib/docker/containers/7fa5abbbfcc70888391d1fe46cf13ea2dd225349b0b899c6f8e60fd6b585bd3a/resolv.conf as [nameserver 10.96.0.10 search default.svc.cluster.local svc.cluster.local cluster.local options ndots:5]"
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.381899070Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.382048336Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.382062675Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.382185963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.433062434Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.433701381Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.433819122Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.434274603Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.614154636Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.614280634Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.614291998Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 10:57:24 ha-090000 dockerd[1114]: time="2024-07-22T10:57:24.614661459Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 10:57:54 ha-090000 dockerd[1108]: time="2024-07-22T10:57:54.856709477Z" level=info msg="ignoring event" container=ea06caf73a7d0c82f3188bf4c821f988c6d96a724553f9eb2405d48823ccb42d module=libcontainerd namespace=moby topic=/tasks/delete type="*events.TaskDelete"
	Jul 22 10:57:54 ha-090000 dockerd[1114]: time="2024-07-22T10:57:54.856997797Z" level=info msg="shim disconnected" id=ea06caf73a7d0c82f3188bf4c821f988c6d96a724553f9eb2405d48823ccb42d namespace=moby
	Jul 22 10:57:54 ha-090000 dockerd[1114]: time="2024-07-22T10:57:54.857029303Z" level=warning msg="cleaning up after shim disconnected" id=ea06caf73a7d0c82f3188bf4c821f988c6d96a724553f9eb2405d48823ccb42d namespace=moby
	Jul 22 10:57:54 ha-090000 dockerd[1114]: time="2024-07-22T10:57:54.857035486Z" level=info msg="cleaning up dead shim" namespace=moby
	Jul 22 10:58:10 ha-090000 dockerd[1114]: time="2024-07-22T10:58:10.369414842Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
	Jul 22 10:58:10 ha-090000 dockerd[1114]: time="2024-07-22T10:58:10.369475705Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
	Jul 22 10:58:10 ha-090000 dockerd[1114]: time="2024-07-22T10:58:10.369515025Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	Jul 22 10:58:10 ha-090000 dockerd[1114]: time="2024-07-22T10:58:10.369868470Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	3c38b6dfe3f08       6e38f40d628db       3 minutes ago       Running             storage-provisioner       3                   63b37a4b34936       storage-provisioner
	dbee947401e9e       8c811b4aec35f       4 minutes ago       Running             busybox                   2                   7fa5abbbfcc70       busybox-fc5497c4f-2tcf2
	421317be1b454       6f1d07c71fa0f       4 minutes ago       Running             kindnet-cni               2                   f70ae8f7b153f       kindnet-mqxjd
	22d788aa28349       cbb01a7bd410d       4 minutes ago       Running             coredns                   2                   4e646db94a0f3       coredns-7db6d8ff4d-lf5mv
	ea06caf73a7d0       6e38f40d628db       4 minutes ago       Exited              storage-provisioner       2                   63b37a4b34936       storage-provisioner
	6a1a698341695       cbb01a7bd410d       4 minutes ago       Running             coredns                   2                   af9872ab6752e       coredns-7db6d8ff4d-mjc97
	9ea9aba3e1e98       55bb025d2cfa5       4 minutes ago       Running             kube-proxy                2                   372499e41b533       kube-proxy-xzpdq
	38dfb2ab5697d       76932a3b37d7e       5 minutes ago       Running             kube-controller-manager   4                   696d1720743f7       kube-controller-manager-ha-090000
	945dd2cdb8d5e       1f6d574d502f3       5 minutes ago       Running             kube-apiserver            4                   f90e22d71e804       kube-apiserver-ha-090000
	d4bee2dc89b59       38af8ddebf499       5 minutes ago       Running             kube-vip                  1                   ed980c36ff3a0       kube-vip-ha-090000
	cbe7a7a54b053       3edc18e7b7672       5 minutes ago       Running             kube-scheduler            2                   060bad469022e       kube-scheduler-ha-090000
	288b4db4b4674       3861cfcd7c04c       5 minutes ago       Running             etcd                      2                   13882f0cb79d3       etcd-ha-090000
	0469220f71ca8       76932a3b37d7e       5 minutes ago       Exited              kube-controller-manager   3                   696d1720743f7       kube-controller-manager-ha-090000
	4b11d2fc0144c       1f6d574d502f3       5 minutes ago       Exited              kube-apiserver            3                   f90e22d71e804       kube-apiserver-ha-090000
	55fbc8e5d31b7       cbb01a7bd410d       8 minutes ago       Exited              coredns                   1                   ee6d0b35bdb3e       coredns-7db6d8ff4d-lf5mv
	1138d893c2d9d       cbb01a7bd410d       9 minutes ago       Exited              coredns                   1                   b7d38b6fa5afe       coredns-7db6d8ff4d-mjc97
	0cf43afb12ba9       6f1d07c71fa0f       9 minutes ago       Exited              kindnet-cni               1                   c2c5f6c134990       kindnet-mqxjd
	391ccb3367a92       55bb025d2cfa5       9 minutes ago       Exited              kube-proxy                1                   a7ddfdc244624       kube-proxy-xzpdq
	c354917eb9a7f       8c811b4aec35f       9 minutes ago       Exited              busybox                   1                   4b6299052dfcb       busybox-fc5497c4f-2tcf2
	b156ed53a712c       38af8ddebf499       10 minutes ago      Exited              kube-vip                  0                   403e3036bbfc3       kube-vip-ha-090000
	13f15d0cc8b35       3861cfcd7c04c       10 minutes ago      Exited              etcd                      1                   d23a99af3047f       etcd-ha-090000
	2c775554c943e       3edc18e7b7672       10 minutes ago      Exited              kube-scheduler            1                   d552ca73d0455       kube-scheduler-ha-090000
	
	
	==> coredns [1138d893c2d9] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:33736 - 25278 "HINFO IN 2232067124097066746.5321966554492967552. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.017294568s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [22d788aa2834] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:60421 - 52053 "HINFO IN 2117351152882643557.306907224904004981. udp 56 false 512" NXDOMAIN qr,rd,ra 131 0.0117126s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1646561162]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Jul-2024 10:57:24.636) (total time: 30001ms):
	Trace[1646561162]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:57:54.637)
	Trace[1646561162]: [30.001499419s] [30.001499419s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[204250251]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Jul-2024 10:57:24.637) (total time: 30003ms):
	Trace[204250251]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:57:54.638)
	Trace[204250251]: [30.003162507s] [30.003162507s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[1668393]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Jul-2024 10:57:24.637) (total time: 30003ms):
	Trace[1668393]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:57:54.638)
	Trace[1668393]: [30.003053761s] [30.003053761s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> coredns [55fbc8e5d31b] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:50575 - 59902 "HINFO IN 3988656002558365066.2402106395491727482. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.01065485s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [6a1a69834169] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 257e111468ef6f1e36f10df061303186c353cd0e51aed8f50f4e4fd21cec02687aef97084fe1f82262f5cee88179d311670a6ae21ae185759728216fc264125f
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:55362 - 47981 "HINFO IN 3732672677383048017.8374754956493277366. udp 57 false 512" NXDOMAIN qr,rd,ra 132 0.011991665s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[975084044]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Jul-2024 10:57:24.636) (total time: 30002ms):
	Trace[975084044]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30001ms (10:57:54.637)
	Trace[975084044]: [30.002362908s] [30.002362908s] END
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[901565610]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Jul-2024 10:57:24.635) (total time: 30004ms):
	Trace[901565610]: ---"Objects listed" error:Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30003ms (10:57:54.638)
	Trace[901565610]: [30.004123041s] [30.004123041s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[INFO] plugin/kubernetes: Trace[882114494]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231 (22-Jul-2024 10:57:24.636) (total time: 30003ms):
	Trace[882114494]: ---"Objects listed" error:Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout 30000ms (10:57:54.637)
	Trace[882114494]: [30.003053378s] [30.003053378s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.27.4/tools/cache/reflector.go:231: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://10.96.0.1:443/api/v1/services?limit=500&resourceVersion=0": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> describe nodes <==
	Name:               ha-090000
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-090000
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=ha-090000
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_07_22T03_43_11_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 10:43:07 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-090000
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 11:01:48 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 10:56:52 +0000   Mon, 22 Jul 2024 10:43:07 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 10:56:52 +0000   Mon, 22 Jul 2024 10:43:07 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 10:56:52 +0000   Mon, 22 Jul 2024 10:43:07 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 10:56:52 +0000   Mon, 22 Jul 2024 10:43:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.5
	  Hostname:    ha-090000
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 e090de7f3c6a411da7987789cad7e565
	  System UUID:                865e4f09-0000-0000-8c93-9ca2b7f6f541
	  Boot ID:                    303932c9-04d5-4f3a-ad0c-ae1b2083c258
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-2tcf2              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 coredns-7db6d8ff4d-lf5mv             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 coredns-7db6d8ff4d-mjc97             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     18m
	  kube-system                 etcd-ha-090000                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         18m
	  kube-system                 kindnet-mqxjd                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      18m
	  kube-system                 kube-apiserver-ha-090000             250m (12%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-controller-manager-ha-090000    200m (10%)    0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-proxy-xzpdq                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-scheduler-ha-090000             100m (5%)     0 (0%)      0 (0%)           0 (0%)         18m
	  kube-system                 kube-vip-ha-090000                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m48s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         18m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 18m                    kube-proxy       
	  Normal  Starting                 4m26s                  kube-proxy       
	  Normal  Starting                 9m20s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  18m                    kubelet          Node ha-090000 status is now: NodeHasSufficientMemory
	  Normal  NodeAllocatableEnforced  18m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 18m                    kubelet          Starting kubelet.
	  Normal  NodeHasNoDiskPressure    18m                    kubelet          Node ha-090000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m                    kubelet          Node ha-090000 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           18m                    node-controller  Node ha-090000 event: Registered Node ha-090000 in Controller
	  Normal  NodeReady                18m                    kubelet          Node ha-090000 status is now: NodeReady
	  Normal  RegisteredNode           17m                    node-controller  Node ha-090000 event: Registered Node ha-090000 in Controller
	  Normal  RegisteredNode           16m                    node-controller  Node ha-090000 event: Registered Node ha-090000 in Controller
	  Normal  RegisteredNode           12m                    node-controller  Node ha-090000 event: Registered Node ha-090000 in Controller
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node ha-090000 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-090000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-090000 status is now: NodeHasSufficientMemory
	  Normal  RegisteredNode           9m53s                  node-controller  Node ha-090000 event: Registered Node ha-090000 in Controller
	  Normal  RegisteredNode           9m52s                  node-controller  Node ha-090000 event: Registered Node ha-090000 in Controller
	  Normal  RegisteredNode           9m25s                  node-controller  Node ha-090000 event: Registered Node ha-090000 in Controller
	  Normal  Starting                 5m55s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m55s (x8 over 5m55s)  kubelet          Node ha-090000 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    5m55s (x8 over 5m55s)  kubelet          Node ha-090000 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     5m55s (x7 over 5m55s)  kubelet          Node ha-090000 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m55s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m51s                  node-controller  Node ha-090000 event: Registered Node ha-090000 in Controller
	  Normal  RegisteredNode           4m49s                  node-controller  Node ha-090000 event: Registered Node ha-090000 in Controller
	  Normal  RegisteredNode           14s                    node-controller  Node ha-090000 event: Registered Node ha-090000 in Controller
	
	
	Name:               ha-090000-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-090000-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=ha-090000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T03_44_17_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 10:44:14 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-090000-m02
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 11:01:45 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 10:56:49 +0000   Mon, 22 Jul 2024 10:44:14 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 10:56:49 +0000   Mon, 22 Jul 2024 10:44:14 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 10:56:49 +0000   Mon, 22 Jul 2024 10:44:14 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 10:56:49 +0000   Mon, 22 Jul 2024 10:44:36 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.6
	  Hostname:    ha-090000-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 653da24da8184d5683495d77f2663655
	  System UUID:                a2384298-0000-0000-98be-9d336c163b01
	  Boot ID:                    3ccf92f6-5554-420f-a8c6-c419f6124a20
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-8n2c6                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 etcd-ha-090000-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         17m
	  kube-system                 kindnet-xt575                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      17m
	  kube-system                 kube-apiserver-ha-090000-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-controller-manager-ha-090000-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-proxy-8wl7h                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-scheduler-ha-090000-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         17m
	  kube-system                 kube-vip-ha-090000-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         17m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m55s                  kube-proxy       
	  Normal   Starting                 17m                    kube-proxy       
	  Normal   Starting                 12m                    kube-proxy       
	  Normal   Starting                 9m59s                  kube-proxy       
	  Normal   NodeHasSufficientPID     17m (x7 over 17m)      kubelet          Node ha-090000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeHasSufficientMemory  17m (x8 over 17m)      kubelet          Node ha-090000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeAllocatableEnforced  17m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasNoDiskPressure    17m (x8 over 17m)      kubelet          Node ha-090000-m02 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           17m                    node-controller  Node ha-090000-m02 event: Registered Node ha-090000-m02 in Controller
	  Normal   RegisteredNode           17m                    node-controller  Node ha-090000-m02 event: Registered Node ha-090000-m02 in Controller
	  Normal   RegisteredNode           16m                    node-controller  Node ha-090000-m02 event: Registered Node ha-090000-m02 in Controller
	  Normal   NodeHasSufficientPID     12m                    kubelet          Node ha-090000-m02 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 12m                    kubelet          Node ha-090000-m02 has been rebooted, boot id: 296c4679-5b51-4230-a93d-85c12fa46a6b
	  Normal   NodeHasNoDiskPressure    12m                    kubelet          Node ha-090000-m02 status is now: NodeHasNoDiskPressure
	  Normal   Starting                 12m                    kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  12m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  12m                    kubelet          Node ha-090000-m02 status is now: NodeHasSufficientMemory
	  Normal   RegisteredNode           12m                    node-controller  Node ha-090000-m02 event: Registered Node ha-090000-m02 in Controller
	  Normal   NodeHasNoDiskPressure    10m (x8 over 10m)      kubelet          Node ha-090000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     10m (x7 over 10m)      kubelet          Node ha-090000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  10m (x8 over 10m)      kubelet          Node ha-090000-m02 status is now: NodeHasSufficientMemory
	  Normal   Starting                 10m                    kubelet          Starting kubelet.
	  Normal   RegisteredNode           9m53s                  node-controller  Node ha-090000-m02 event: Registered Node ha-090000-m02 in Controller
	  Normal   RegisteredNode           9m52s                  node-controller  Node ha-090000-m02 event: Registered Node ha-090000-m02 in Controller
	  Normal   RegisteredNode           9m25s                  node-controller  Node ha-090000-m02 event: Registered Node ha-090000-m02 in Controller
	  Normal   Starting                 5m12s                  kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  5m12s (x8 over 5m12s)  kubelet          Node ha-090000-m02 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    5m12s (x8 over 5m12s)  kubelet          Node ha-090000-m02 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     5m12s (x7 over 5m12s)  kubelet          Node ha-090000-m02 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  5m12s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           4m51s                  node-controller  Node ha-090000-m02 event: Registered Node ha-090000-m02 in Controller
	  Normal   RegisteredNode           4m49s                  node-controller  Node ha-090000-m02 event: Registered Node ha-090000-m02 in Controller
	  Normal   RegisteredNode           14s                    node-controller  Node ha-090000-m02 event: Registered Node ha-090000-m02 in Controller
	
	
	Name:               ha-090000-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-090000-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=ha-090000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T03_48_19_0700
	                    minikube.k8s.io/version=v1.33.1
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 10:48:19 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-090000-m04
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 10:54:44 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Mon, 22 Jul 2024 10:54:34 +0000   Mon, 22 Jul 2024 10:57:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Mon, 22 Jul 2024 10:54:34 +0000   Mon, 22 Jul 2024 10:57:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Mon, 22 Jul 2024 10:54:34 +0000   Mon, 22 Jul 2024 10:57:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Mon, 22 Jul 2024 10:54:34 +0000   Mon, 22 Jul 2024 10:57:40 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.169.0.8
	  Hostname:    ha-090000-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 a0ea4e5276974bf88752eb7d59d19d28
	  System UUID:                f13543bd-0000-0000-a5c6-6cfffb7afaca
	  Boot ID:                    efc37e64-414b-4a11-8b92-5afe32b46caa
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                       CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                       ------------  ----------  ---------------  -------------  ---
	  default                     busybox-fc5497c4f-5xsl6    0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m10s
	  kube-system                 kindnet-kqb2r              100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-8f92w           0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   Starting                 7m15s                  kube-proxy       
	  Normal   NodeHasNoDiskPressure    13m (x2 over 13m)      kubelet          Node ha-090000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientMemory  13m (x2 over 13m)      kubelet          Node ha-090000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m (x2 over 13m)      kubelet          Node ha-090000-m04 status is now: NodeHasSufficientPID
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   RegisteredNode           13m                    node-controller  Node ha-090000-m04 event: Registered Node ha-090000-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-090000-m04 event: Registered Node ha-090000-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-090000-m04 event: Registered Node ha-090000-m04 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-090000-m04 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-090000-m04 event: Registered Node ha-090000-m04 in Controller
	  Normal   RegisteredNode           9m53s                  node-controller  Node ha-090000-m04 event: Registered Node ha-090000-m04 in Controller
	  Normal   RegisteredNode           9m52s                  node-controller  Node ha-090000-m04 event: Registered Node ha-090000-m04 in Controller
	  Normal   RegisteredNode           9m25s                  node-controller  Node ha-090000-m04 event: Registered Node ha-090000-m04 in Controller
	  Normal   NodeNotReady             9m13s                  node-controller  Node ha-090000-m04 status is now: NodeNotReady
	  Normal   Starting                 7m17s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  7m17s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  7m17s (x2 over 7m17s)  kubelet          Node ha-090000-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    7m17s (x2 over 7m17s)  kubelet          Node ha-090000-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     7m17s (x2 over 7m17s)  kubelet          Node ha-090000-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 7m17s                  kubelet          Node ha-090000-m04 has been rebooted, boot id: efc37e64-414b-4a11-8b92-5afe32b46caa
	  Normal   NodeReady                7m17s                  kubelet          Node ha-090000-m04 status is now: NodeReady
	  Normal   RegisteredNode           4m51s                  node-controller  Node ha-090000-m04 event: Registered Node ha-090000-m04 in Controller
	  Normal   RegisteredNode           4m49s                  node-controller  Node ha-090000-m04 event: Registered Node ha-090000-m04 in Controller
	  Normal   NodeNotReady             4m11s                  node-controller  Node ha-090000-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           14s                    node-controller  Node ha-090000-m04 event: Registered Node ha-090000-m04 in Controller
	
	
	Name:               ha-090000-m05
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-090000-m05
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=8e5b1d22910d5d447b525af478862a848159d7b7
	                    minikube.k8s.io/name=ha-090000
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_07_22T04_01_23_0700
	                    minikube.k8s.io/version=v1.33.1
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/cri-dockerd.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Mon, 22 Jul 2024 11:01:19 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-090000-m05
	  AcquireTime:     <unset>
	  RenewTime:       Mon, 22 Jul 2024 11:01:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Mon, 22 Jul 2024 11:01:49 +0000   Mon, 22 Jul 2024 11:01:19 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Mon, 22 Jul 2024 11:01:49 +0000   Mon, 22 Jul 2024 11:01:19 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Mon, 22 Jul 2024 11:01:49 +0000   Mon, 22 Jul 2024 11:01:19 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Mon, 22 Jul 2024 11:01:49 +0000   Mon, 22 Jul 2024 11:01:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.169.0.9
	  Hostname:    ha-090000-m05
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164336Ki
	  pods:               110
	System Info:
	  Machine ID:                 e2df90def9004bafa5045d60e4165c06
	  System UUID:                96db4c4b-0000-0000-948e-0c1146e9b88d
	  Boot ID:                    8e2deaaf-9760-4eea-8ba8-fca0442d6126
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  docker://27.0.3
	  Kubelet Version:            v1.30.3
	  Kube-Proxy Version:         v1.30.3
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (7 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  kube-system                 etcd-ha-090000-m05                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         30s
	  kube-system                 kindnet-5f85x                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      32s
	  kube-system                 kube-apiserver-ha-090000-m05             250m (12%)    0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-controller-manager-ha-090000-m05    200m (10%)    0 (0%)      0 (0%)           0 (0%)         28s
	  kube-system                 kube-proxy-sm99p                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         32s
	  kube-system                 kube-scheduler-ha-090000-m05             100m (5%)     0 (0%)      0 (0%)           0 (0%)         30s
	  kube-system                 kube-vip-ha-090000-m05                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         28s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28s                kube-proxy       
	  Normal  NodeHasSufficientMemory  32s (x8 over 32s)  kubelet          Node ha-090000-m05 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    32s (x8 over 32s)  kubelet          Node ha-090000-m05 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     32s (x7 over 32s)  kubelet          Node ha-090000-m05 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  32s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           31s                node-controller  Node ha-090000-m05 event: Registered Node ha-090000-m05 in Controller
	  Normal  RegisteredNode           29s                node-controller  Node ha-090000-m05 event: Registered Node ha-090000-m05 in Controller
	  Normal  RegisteredNode           14s                node-controller  Node ha-090000-m05 event: Registered Node ha-090000-m05 in Controller
	
	
	==> dmesg <==
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.035617] ACPI BIOS Warning (bug): Incorrect checksum in table [DSDT] - 0xBE, should be 0x1B (20200925/tbprint-173)
	[  +0.007975] RETBleed: WARNING: Spectre v2 mitigation leaves CPU vulnerable to RETBleed attacks, data leaks possible!
	[  +5.373871] ACPI Error: Could not enable RealTimeClock event (20200925/evxfevnt-182)
	[  +0.000002] ACPI Warning: Could not enable fixed event - RealTimeClock (4) (20200925/evxface-618)
	[  +0.007077] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +2.539212] systemd-fstab-generator[127]: Ignoring "noauto" option for root device
	[  +2.228184] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000008] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +24.988715] systemd-fstab-generator[494]: Ignoring "noauto" option for root device
	[  +0.107475] systemd-fstab-generator[506]: Ignoring "noauto" option for root device
	[  +1.936648] systemd-fstab-generator[1035]: Ignoring "noauto" option for root device
	[  +0.257907] systemd-fstab-generator[1074]: Ignoring "noauto" option for root device
	[  +0.103829] systemd-fstab-generator[1086]: Ignoring "noauto" option for root device
	[  +0.114013] systemd-fstab-generator[1100]: Ignoring "noauto" option for root device
	[  +2.455872] systemd-fstab-generator[1315]: Ignoring "noauto" option for root device
	[  +0.050301] kauditd_printk_skb: 195 callbacks suppressed
	[  +0.044507] systemd-fstab-generator[1327]: Ignoring "noauto" option for root device
	[  +0.113079] systemd-fstab-generator[1339]: Ignoring "noauto" option for root device
	[  +0.126479] systemd-fstab-generator[1354]: Ignoring "noauto" option for root device
	[  +0.424114] systemd-fstab-generator[1518]: Ignoring "noauto" option for root device
	[Jul22 10:56] kauditd_printk_skb: 110 callbacks suppressed
	[ +21.703375] kauditd_printk_skb: 40 callbacks suppressed
	[Jul22 10:57] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [13f15d0cc8b3] <==
	{"level":"warn","ts":"2024-07-22T10:55:06.173331Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-22T10:54:59.683443Z","time spent":"6.489887603s","remote":"127.0.0.1:56816","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/07/22 10:55:06 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-22T10:55:06.17339Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"5.16404297s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-07-22T10:55:06.173403Z","caller":"traceutil/trace.go:171","msg":"trace[1906020764] range","detail":"{range_begin:/registry/pods/; range_end:/registry/pods0; }","duration":"5.164078582s","start":"2024-07-22T10:55:01.00932Z","end":"2024-07-22T10:55:06.173399Z","steps":["trace[1906020764] 'agreement among raft nodes before linearized reading'  (duration: 5.164064047s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T10:55:06.173414Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-22T10:55:01.009308Z","time spent":"5.164102318s","remote":"127.0.0.1:56742","response type":"/etcdserverpb.KV/Range","request count":0,"request size":36,"response count":0,"response size":0,"request content":"key:\"/registry/pods/\" range_end:\"/registry/pods0\" count_only:true "}
	2024/07/22 10:55:06 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-22T10:55:06.173464Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-22T10:55:01.643069Z","time spent":"4.530393805s","remote":"127.0.0.1:56816","response type":"/etcdserverpb.KV/Txn","request count":0,"request size":0,"response count":0,"response size":0,"request content":""}
	2024/07/22 10:55:06 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-22T10:55:06.173512Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"1.532556119s","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true ","response":"","error":"context canceled"}
	{"level":"info","ts":"2024-07-22T10:55:06.173523Z","caller":"traceutil/trace.go:171","msg":"trace[1235589650] range","detail":"{range_begin:/registry/storageclasses/; range_end:/registry/storageclasses0; }","duration":"1.532569274s","start":"2024-07-22T10:55:04.640951Z","end":"2024-07-22T10:55:06.17352Z","steps":["trace[1235589650] 'agreement among raft nodes before linearized reading'  (duration: 1.532556007s)"],"step_count":1}
	{"level":"warn","ts":"2024-07-22T10:55:06.173549Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-07-22T10:55:04.640945Z","time spent":"1.53259879s","remote":"127.0.0.1:56928","response type":"/etcdserverpb.KV/Range","request count":0,"request size":56,"response count":0,"response size":0,"request content":"key:\"/registry/storageclasses/\" range_end:\"/registry/storageclasses0\" count_only:true "}
	2024/07/22 10:55:06 WARNING: [core] [Server #8] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-07-22T10:55:06.197209Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-07-22T10:55:06.197255Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.169.0.5:2379: use of closed network connection"}
	{"level":"info","ts":"2024-07-22T10:55:06.198761Z","caller":"etcdserver/server.go:1462","msg":"skipped leadership transfer; local server is not leader","local-member-id":"b8c6c7563d17d844","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-07-22T10:55:06.198873Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"5ef48be478f2a308"}
	{"level":"info","ts":"2024-07-22T10:55:06.198885Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5ef48be478f2a308"}
	{"level":"info","ts":"2024-07-22T10:55:06.198901Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5ef48be478f2a308"}
	{"level":"info","ts":"2024-07-22T10:55:06.198951Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"5ef48be478f2a308"}
	{"level":"info","ts":"2024-07-22T10:55:06.198977Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"5ef48be478f2a308"}
	{"level":"info","ts":"2024-07-22T10:55:06.199001Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"5ef48be478f2a308"}
	{"level":"info","ts":"2024-07-22T10:55:06.19901Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"5ef48be478f2a308"}
	{"level":"info","ts":"2024-07-22T10:55:06.201088Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-07-22T10:55:06.201174Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.169.0.5:2380"}
	{"level":"info","ts":"2024-07-22T10:55:06.201202Z","caller":"embed/etcd.go:377","msg":"closed etcd server","name":"ha-090000","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.169.0.5:2380"],"advertise-client-urls":["https://192.169.0.5:2379"]}
	
	
	==> etcd [288b4db4b467] <==
	{"level":"warn","ts":"2024-07-22T10:56:46.968533Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"127.0.0.1:38504","server-name":"","error":"EOF"}
	{"level":"info","ts":"2024-07-22T11:01:19.788767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(6842247547280597768 13314548521573537860) learners=(4873884792961339559)"}
	{"level":"info","ts":"2024-07-22T11:01:19.789089Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844","added-peer-id":"43a3846570a390a7","added-peer-peer-urls":["https://192.169.0.9:2380"]}
	{"level":"info","ts":"2024-07-22T11:01:19.789133Z","caller":"rafthttp/peer.go:133","msg":"starting remote peer","remote-peer-id":"43a3846570a390a7"}
	{"level":"info","ts":"2024-07-22T11:01:19.789393Z","caller":"rafthttp/pipeline.go:72","msg":"started HTTP pipelining with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"43a3846570a390a7"}
	{"level":"info","ts":"2024-07-22T11:01:19.790541Z","caller":"rafthttp/peer.go:137","msg":"started remote peer","remote-peer-id":"43a3846570a390a7"}
	{"level":"info","ts":"2024-07-22T11:01:19.790601Z","caller":"rafthttp/transport.go:317","msg":"added remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"43a3846570a390a7","remote-peer-urls":["https://192.169.0.9:2380"]}
	{"level":"info","ts":"2024-07-22T11:01:19.790624Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"43a3846570a390a7"}
	{"level":"info","ts":"2024-07-22T11:01:19.790798Z","caller":"rafthttp/stream.go:169","msg":"started stream writer with remote peer","local-member-id":"b8c6c7563d17d844","remote-peer-id":"43a3846570a390a7"}
	{"level":"info","ts":"2024-07-22T11:01:19.791027Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"43a3846570a390a7"}
	{"level":"info","ts":"2024-07-22T11:01:19.791085Z","caller":"rafthttp/stream.go:395","msg":"started stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"43a3846570a390a7"}
	{"level":"warn","ts":"2024-07-22T11:01:19.828198Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"43a3846570a390a7","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"warn","ts":"2024-07-22T11:01:20.821658Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"43a3846570a390a7","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-07-22T11:01:21.112421Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"43a3846570a390a7"}
	{"level":"info","ts":"2024-07-22T11:01:21.122531Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"43a3846570a390a7"}
	{"level":"info","ts":"2024-07-22T11:01:21.122966Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"43a3846570a390a7"}
	{"level":"info","ts":"2024-07-22T11:01:21.124751Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"43a3846570a390a7","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-07-22T11:01:21.124797Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"b8c6c7563d17d844","remote-peer-id":"43a3846570a390a7"}
	{"level":"info","ts":"2024-07-22T11:01:21.126782Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"b8c6c7563d17d844","to":"43a3846570a390a7","stream-type":"stream Message"}
	{"level":"info","ts":"2024-07-22T11:01:21.126805Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"b8c6c7563d17d844","remote-peer-id":"43a3846570a390a7"}
	{"level":"warn","ts":"2024-07-22T11:01:21.168509Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.169.0.9:48788","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-07-22T11:01:21.821678Z","caller":"etcdhttp/peer.go:150","msg":"failed to promote a member","member-id":"43a3846570a390a7","error":"etcdserver: can only promote a learner member which is in sync with leader"}
	{"level":"info","ts":"2024-07-22T11:01:22.322415Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"b8c6c7563d17d844 switched to configuration voters=(4873884792961339559 6842247547280597768 13314548521573537860)"}
	{"level":"info","ts":"2024-07-22T11:01:22.322587Z","caller":"membership/cluster.go:535","msg":"promote member","cluster-id":"b73189effde9bc63","local-member-id":"b8c6c7563d17d844"}
	{"level":"info","ts":"2024-07-22T11:01:22.322622Z","caller":"etcdserver/server.go:1946","msg":"applied a configuration change through raft","local-member-id":"b8c6c7563d17d844","raft-conf-change":"ConfChangeAddNode","raft-conf-change-node-id":"43a3846570a390a7"}
	
	
	==> kernel <==
	 11:01:51 up 6 min,  0 users,  load average: 0.20, 0.12, 0.05
	Linux ha-090000 5.10.207 #1 SMP Thu Jul 18 22:16:38 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [0cf43afb12ba] <==
	I0722 10:54:34.307043       1 main.go:299] handling current node
	I0722 10:54:34.307109       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0722 10:54:34.307241       1 main.go:322] Node ha-090000-m02 has CIDR [10.244.1.0/24] 
	I0722 10:54:34.307497       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0722 10:54:34.307630       1 main.go:322] Node ha-090000-m03 has CIDR [10.244.2.0/24] 
	I0722 10:54:44.306852       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0722 10:54:44.306921       1 main.go:299] handling current node
	I0722 10:54:44.306943       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0722 10:54:44.306951       1 main.go:322] Node ha-090000-m02 has CIDR [10.244.1.0/24] 
	I0722 10:54:44.307088       1 main.go:295] Handling node with IPs: map[192.169.0.7:{}]
	I0722 10:54:44.307126       1 main.go:322] Node ha-090000-m03 has CIDR [10.244.2.0/24] 
	I0722 10:54:44.307182       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0722 10:54:44.307217       1 main.go:322] Node ha-090000-m04 has CIDR [10.244.3.0/24] 
	I0722 10:54:54.307325       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0722 10:54:54.307472       1 main.go:299] handling current node
	I0722 10:54:54.307635       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0722 10:54:54.307859       1 main.go:322] Node ha-090000-m02 has CIDR [10.244.1.0/24] 
	I0722 10:54:54.308266       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0722 10:54:54.308282       1 main.go:322] Node ha-090000-m04 has CIDR [10.244.3.0/24] 
	I0722 10:55:04.306079       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0722 10:55:04.306198       1 main.go:322] Node ha-090000-m04 has CIDR [10.244.3.0/24] 
	I0722 10:55:04.308236       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0722 10:55:04.308265       1 main.go:299] handling current node
	I0722 10:55:04.308274       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0722 10:55:04.308279       1 main.go:322] Node ha-090000-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kindnet [421317be1b45] <==
	I0722 11:01:25.603392       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0722 11:01:25.603527       1 main.go:299] handling current node
	I0722 11:01:25.603555       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0722 11:01:25.603569       1 main.go:322] Node ha-090000-m02 has CIDR [10.244.1.0/24] 
	I0722 11:01:25.603829       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0722 11:01:25.603923       1 main.go:322] Node ha-090000-m04 has CIDR [10.244.3.0/24] 
	I0722 11:01:25.604082       1 main.go:295] Handling node with IPs: map[192.169.0.9:{}]
	I0722 11:01:25.604120       1 main.go:322] Node ha-090000-m05 has CIDR [10.244.2.0/24] 
	I0722 11:01:25.604235       1 routes.go:62] Adding route {Ifindex: 0 Dst: 10.244.2.0/24 Src: <nil> Gw: 192.169.0.9 Flags: [] Table: 0} 
	I0722 11:01:35.604159       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0722 11:01:35.604212       1 main.go:299] handling current node
	I0722 11:01:35.604226       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0722 11:01:35.604233       1 main.go:322] Node ha-090000-m02 has CIDR [10.244.1.0/24] 
	I0722 11:01:35.604593       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0722 11:01:35.604632       1 main.go:322] Node ha-090000-m04 has CIDR [10.244.3.0/24] 
	I0722 11:01:35.604872       1 main.go:295] Handling node with IPs: map[192.169.0.9:{}]
	I0722 11:01:35.604914       1 main.go:322] Node ha-090000-m05 has CIDR [10.244.2.0/24] 
	I0722 11:01:45.604841       1 main.go:295] Handling node with IPs: map[192.169.0.9:{}]
	I0722 11:01:45.604882       1 main.go:322] Node ha-090000-m05 has CIDR [10.244.2.0/24] 
	I0722 11:01:45.605021       1 main.go:295] Handling node with IPs: map[192.169.0.5:{}]
	I0722 11:01:45.605051       1 main.go:299] handling current node
	I0722 11:01:45.605120       1 main.go:295] Handling node with IPs: map[192.169.0.6:{}]
	I0722 11:01:45.605151       1 main.go:322] Node ha-090000-m02 has CIDR [10.244.1.0/24] 
	I0722 11:01:45.605259       1 main.go:295] Handling node with IPs: map[192.169.0.8:{}]
	I0722 11:01:45.605289       1 main.go:322] Node ha-090000-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [4b11d2fc0144] <==
	I0722 10:56:02.936396       1 options.go:221] external host was not specified, using 192.169.0.5
	I0722 10:56:02.938177       1 server.go:148] Version: v1.30.3
	I0722 10:56:02.938401       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 10:56:04.233098       1 shared_informer.go:313] Waiting for caches to sync for node_authorizer
	I0722 10:56:04.237477       1 shared_informer.go:313] Waiting for caches to sync for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0722 10:56:04.240213       1 plugins.go:157] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0722 10:56:04.242473       1 plugins.go:160] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0722 10:56:04.246562       1 instance.go:299] Using reconciler: lease
	W0722 10:56:24.231990       1 logging.go:59] [core] [Channel #1 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0722 10:56:24.232857       1 logging.go:59] [core] [Channel #2 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1:2379", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	F0722 10:56:24.247519       1 instance.go:292] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-apiserver [945dd2cdb8d5] <==
	I0722 10:56:48.075828       1 naming_controller.go:291] Starting NamingConditionController
	I0722 10:56:48.075837       1 establishing_controller.go:76] Starting EstablishingController
	I0722 10:56:48.075846       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0722 10:56:48.075868       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0722 10:56:48.075874       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0722 10:56:48.207926       1 shared_informer.go:320] Caches are synced for cluster_authentication_trust_controller
	I0722 10:56:48.208391       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0722 10:56:48.211944       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0722 10:56:48.211985       1 shared_informer.go:320] Caches are synced for configmaps
	I0722 10:56:48.212771       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0722 10:56:48.213106       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0722 10:56:48.216923       1 shared_informer.go:320] Caches are synced for crd-autoregister
	I0722 10:56:48.217042       1 aggregator.go:165] initial CRD sync complete...
	I0722 10:56:48.217145       1 autoregister_controller.go:141] Starting autoregister controller
	I0722 10:56:48.217189       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0722 10:56:48.217275       1 cache.go:39] Caches are synced for autoregister controller
	I0722 10:56:48.221104       1 handler_discovery.go:447] Starting ResourceDiscoveryManager
	I0722 10:56:48.239521       1 shared_informer.go:320] Caches are synced for node_authorizer
	I0722 10:56:48.242525       1 shared_informer.go:320] Caches are synced for *generic.policySource[*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicy,*k8s.io/api/admissionregistration/v1.ValidatingAdmissionPolicyBinding,k8s.io/apiserver/pkg/admission/plugin/policy/validating.Validator]
	I0722 10:56:48.242684       1 policy_source.go:224] refreshing policies
	E0722 10:56:48.283526       1 controller.go:97] Error removing old endpoints from kubernetes service: no API server IP addresses were listed in storage, refusing to erase all endpoints for the kubernetes Service
	I0722 10:56:48.308533       1 controller.go:615] quota admission added evaluator for: leases.coordination.k8s.io
	I0722 10:56:49.071315       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0722 10:57:23.501088       1 controller.go:615] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0722 10:57:23.512462       1 controller.go:615] quota admission added evaluator for: endpoints
	
	
	==> kube-controller-manager [0469220f71ca] <==
	I0722 10:56:03.617244       1 serving.go:380] Generated self-signed cert in-memory
	I0722 10:56:04.853391       1 controllermanager.go:189] "Starting" version="v1.30.3"
	I0722 10:56:04.853427       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 10:56:04.854617       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0722 10:56:04.854850       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0722 10:56:04.854944       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0722 10:56:04.855121       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	E0722 10:56:25.255891       1 controllermanager.go:234] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.169.0.5:8443/healthz\": dial tcp 192.169.0.5:8443: connect: connection refused"
	
	
	==> kube-controller-manager [38dfb2ab5697] <==
	I0722 10:57:40.637034       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-090000-m03"
	I0722 10:57:40.651093       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/etcd-ha-090000-m03"
	I0722 10:57:40.651349       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-090000-m03"
	I0722 10:57:40.666222       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-apiserver-ha-090000-m03"
	I0722 10:57:40.666259       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-lf6b4"
	I0722 10:57:40.681166       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kindnet-lf6b4"
	I0722 10:57:40.681200       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-s5kg7"
	I0722 10:57:40.694662       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-proxy-s5kg7"
	I0722 10:57:40.694710       1 gc_controller.go:344] "PodGC is force deleting Pod" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-090000-m03"
	I0722 10:57:40.709068       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" logger="pod-garbage-collector-controller" pod="kube-system/kube-vip-ha-090000-m03"
	I0722 10:57:40.764272       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="28.008583ms"
	I0722 10:57:40.764797       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="default/busybox-fc5497c4f" duration="16.544µs"
	I0722 10:58:03.618800       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="9.381236ms"
	I0722 10:58:03.621760       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-rfbkc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-rfbkc\": the object has been modified; please apply your changes to the latest version and try again"
	I0722 10:58:03.621937       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"cbf53cbe-6c6b-4eb8-83fb-57cb4eb26b48", APIVersion:"v1", ResourceVersion:"259", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-rfbkc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-rfbkc": the object has been modified; please apply your changes to the latest version and try again
	I0722 10:58:03.622633       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="3.074861ms"
	I0722 10:58:03.644685       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="10.170528ms"
	I0722 10:58:03.645698       1 replica_set.go:676] "Finished syncing" logger="replicaset-controller" kind="ReplicaSet" key="kube-system/coredns-7db6d8ff4d" duration="34.7µs"
	I0722 10:58:03.645633       1 event.go:377] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"cbf53cbe-6c6b-4eb8-83fb-57cb4eb26b48", APIVersion:"v1", ResourceVersion:"259", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-rfbkc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-rfbkc": the object has been modified; please apply your changes to the latest version and try again
	I0722 10:58:03.645487       1 endpointslice_controller.go:311] "Error syncing endpoint slices for service, retrying" logger="endpointslice-controller" key="kube-system/kube-dns" err="failed to update kube-dns-rfbkc EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-rfbkc\": the object has been modified; please apply your changes to the latest version and try again"
	E0722 11:01:19.417992       1 certificate_controller.go:146] Sync csr-wh5j7 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-wh5j7": the object has been modified; please apply your changes to the latest version and try again
	E0722 11:01:19.418752       1 certificate_controller.go:146] Sync csr-wh5j7 failed with : error updating approval for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-wh5j7": the object has been modified; please apply your changes to the latest version and try again
	I0722 11:01:19.506241       1 actual_state_of_world.go:543] "Failed to update statusUpdateNeeded field in actual state of world" logger="persistentvolume-attach-detach-controller" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-090000-m05\" does not exist"
	I0722 11:01:19.514934       1 range_allocator.go:381] "Set node PodCIDR" logger="node-ipam-controller" node="ha-090000-m05" podCIDRs=["10.244.2.0/24"]
	I0722 11:01:20.800128       1 node_lifecycle_controller.go:879] "Missing timestamp for Node. Assuming now as a timestamp" logger="node-lifecycle-controller" node="ha-090000-m05"
	
	
	==> kube-proxy [391ccb3367a9] <==
	I0722 10:52:31.180149       1 server_linux.go:69] "Using iptables proxy"
	I0722 10:52:31.201174       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	I0722 10:52:31.256621       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 10:52:31.256706       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 10:52:31.256721       1 server_linux.go:165] "Using iptables Proxier"
	I0722 10:52:31.259083       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 10:52:31.259774       1 server.go:872] "Version info" version="v1.30.3"
	I0722 10:52:31.259804       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 10:52:31.261784       1 config.go:192] "Starting service config controller"
	I0722 10:52:31.262305       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 10:52:31.261811       1 config.go:101] "Starting endpoint slice config controller"
	I0722 10:52:31.262481       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 10:52:31.264064       1 config.go:319] "Starting node config controller"
	I0722 10:52:31.264089       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 10:52:31.362703       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 10:52:31.362744       1 shared_informer.go:320] Caches are synced for service config
	I0722 10:52:31.364747       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-proxy [9ea9aba3e1e9] <==
	I0722 10:57:24.900497       1 server_linux.go:69] "Using iptables proxy"
	I0722 10:57:24.919847       1 server.go:1062] "Successfully retrieved node IP(s)" IPs=["192.169.0.5"]
	I0722 10:57:24.958255       1 server_linux.go:143] "No iptables support for family" ipFamily="IPv6"
	I0722 10:57:24.958402       1 server.go:661] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0722 10:57:24.958517       1 server_linux.go:165] "Using iptables Proxier"
	I0722 10:57:24.961727       1 proxier.go:243] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0722 10:57:24.962180       1 server.go:872] "Version info" version="v1.30.3"
	I0722 10:57:24.962261       1 server.go:874] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0722 10:57:24.964654       1 config.go:192] "Starting service config controller"
	I0722 10:57:24.964872       1 shared_informer.go:313] Waiting for caches to sync for service config
	I0722 10:57:24.964945       1 config.go:101] "Starting endpoint slice config controller"
	I0722 10:57:24.964997       1 shared_informer.go:313] Waiting for caches to sync for endpoint slice config
	I0722 10:57:24.966344       1 config.go:319] "Starting node config controller"
	I0722 10:57:24.967117       1 shared_informer.go:313] Waiting for caches to sync for node config
	I0722 10:57:25.066129       1 shared_informer.go:320] Caches are synced for endpoint slice config
	I0722 10:57:25.066147       1 shared_informer.go:320] Caches are synced for service config
	I0722 10:57:25.067691       1 shared_informer.go:320] Caches are synced for node config
	
	
	==> kube-scheduler [2c775554c943] <==
	W0722 10:51:45.397777       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0722 10:51:45.397808       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0722 10:51:45.397839       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0722 10:51:45.397871       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0722 10:51:45.397899       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0722 10:51:45.397947       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0722 10:51:45.397980       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0722 10:51:45.413802       1 reflector.go:547] runtime/asm_amd64.s:1695: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0722 10:51:45.422126       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 10:51:45.422305       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0722 10:51:45.422479       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 10:51:45.422614       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 10:51:45.422760       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0722 10:51:45.422889       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0722 10:51:45.423057       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 10:51:45.423192       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 10:51:45.423231       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0722 10:51:45.423239       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0722 10:51:45.423332       1 reflector.go:150] runtime/asm_amd64.s:1695: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0722 10:52:01.354860       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0722 10:54:41.357190       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-5xsl6\": pod busybox-fc5497c4f-5xsl6 is already assigned to node \"ha-090000-m04\"" plugin="DefaultBinder" pod="default/busybox-fc5497c4f-5xsl6" node="ha-090000-m04"
	E0722 10:54:41.357320       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod 6d9ce972-1b0d-49c5-944b-6beca3ab4c50(default/busybox-fc5497c4f-5xsl6) wasn't assumed so cannot be forgotten" pod="default/busybox-fc5497c4f-5xsl6"
	E0722 10:54:41.357354       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-fc5497c4f-5xsl6\": pod busybox-fc5497c4f-5xsl6 is already assigned to node \"ha-090000-m04\"" pod="default/busybox-fc5497c4f-5xsl6"
	I0722 10:54:41.357392       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-fc5497c4f-5xsl6" node="ha-090000-m04"
	E0722 10:55:06.233408       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [cbe7a7a54b05] <==
	W0722 10:56:48.194169       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0722 10:56:48.194206       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0722 10:56:48.194280       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0722 10:56:48.194336       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0722 10:56:48.195930       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0722 10:56:48.195970       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0722 10:56:48.196165       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0722 10:56:48.196201       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0722 10:56:48.196454       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0722 10:56:48.196487       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0722 10:56:48.197536       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0722 10:56:48.197572       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0722 10:56:48.197667       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0722 10:56:48.197700       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0722 10:56:48.197762       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0722 10:56:48.197795       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0722 10:56:48.197869       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0722 10:56:48.197900       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0722 10:56:48.197990       1 reflector.go:547] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0722 10:56:48.198023       1 reflector.go:150] k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	I0722 10:57:06.865178       1 shared_informer.go:320] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0722 11:01:19.562321       1 framework.go:1286] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-5f85x\": pod kindnet-5f85x is already assigned to node \"ha-090000-m05\"" plugin="DefaultBinder" pod="kube-system/kindnet-5f85x" node="ha-090000-m05"
	E0722 11:01:19.562730       1 schedule_one.go:338] "scheduler cache ForgetPod failed" err="pod f8b65156-15cc-4b6a-a46a-5aa92732c2c7(kube-system/kindnet-5f85x) wasn't assumed so cannot be forgotten" pod="kube-system/kindnet-5f85x"
	E0722 11:01:19.562769       1 schedule_one.go:1046] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-5f85x\": pod kindnet-5f85x is already assigned to node \"ha-090000-m05\"" pod="kube-system/kindnet-5f85x"
	I0722 11:01:19.562783       1 schedule_one.go:1059] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-5f85x" node="ha-090000-m05"
	
	
	==> kubelet <==
	Jul 22 10:57:23 ha-090000 kubelet[1525]: I0722 10:57:23.403242    1525 reconciler_common.go:247] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d7b9fd02-6f5f-47a4-afe2-1dab0d9141b7-xtables-lock\") pod \"kube-proxy-xzpdq\" (UID: \"d7b9fd02-6f5f-47a4-afe2-1dab0d9141b7\") " pod="kube-system/kube-proxy-xzpdq"
	Jul 22 10:57:55 ha-090000 kubelet[1525]: I0722 10:57:55.092868    1525 scope.go:117] "RemoveContainer" containerID="20b3e825f92688bc16eac5677dae4924c90dbb460ee6bd408c84b27166d3492d"
	Jul 22 10:57:55 ha-090000 kubelet[1525]: I0722 10:57:55.093131    1525 scope.go:117] "RemoveContainer" containerID="ea06caf73a7d0c82f3188bf4c821f988c6d96a724553f9eb2405d48823ccb42d"
	Jul 22 10:57:55 ha-090000 kubelet[1525]: E0722 10:57:55.093241    1525 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"storage-provisioner\" with CrashLoopBackOff: \"back-off 10s restarting failed container=storage-provisioner pod=storage-provisioner_kube-system(c1214845-bf0e-4808-9e11-faf18dd3cb3f)\"" pod="kube-system/storage-provisioner" podUID="c1214845-bf0e-4808-9e11-faf18dd3cb3f"
	Jul 22 10:57:56 ha-090000 kubelet[1525]: E0722 10:57:56.362531    1525 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 10:57:56 ha-090000 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 10:57:56 ha-090000 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 10:57:56 ha-090000 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 10:57:56 ha-090000 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 10:58:10 ha-090000 kubelet[1525]: I0722 10:58:10.329825    1525 scope.go:117] "RemoveContainer" containerID="ea06caf73a7d0c82f3188bf4c821f988c6d96a724553f9eb2405d48823ccb42d"
	Jul 22 10:58:56 ha-090000 kubelet[1525]: E0722 10:58:56.365702    1525 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 10:58:56 ha-090000 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 10:58:56 ha-090000 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 10:58:56 ha-090000 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 10:58:56 ha-090000 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 10:59:56 ha-090000 kubelet[1525]: E0722 10:59:56.363174    1525 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 10:59:56 ha-090000 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 10:59:56 ha-090000 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 10:59:56 ha-090000 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 10:59:56 ha-090000 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Jul 22 11:00:56 ha-090000 kubelet[1525]: E0722 11:00:56.361898    1525 iptables.go:577] "Could not set up iptables canary" err=<
	Jul 22 11:00:56 ha-090000 kubelet[1525]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Jul 22 11:00:56 ha-090000 kubelet[1525]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Jul 22 11:00:56 ha-090000 kubelet[1525]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Jul 22 11:00:56 ha-090000 kubelet[1525]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	

                                                
                                                
-- /stdout --
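A side note on the repeated kubelet errors above ("Could not set up iptables canary ... ip6tables ... Table does not exist"): they indicate the guest has no ip6tables nat table available. A possible manual check from the host, using the profile name from this run (the module name ip6table_nat is an assumption about the guest kernel, not something shown in the log):

	out/minikube-darwin-amd64 -p ha-090000 ssh -- "lsmod | grep ip6table"
	out/minikube-darwin-amd64 -p ha-090000 ssh -- "sudo modprobe ip6table_nat && sudo ip6tables -t nat -L -n"

If the module is simply absent from the guest image, the canary warning is background noise; whether it relates to this test's failure is not established by the log.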
helpers_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p ha-090000 -n ha-090000
helpers_test.go:261: (dbg) Run:  kubectl --context ha-090000 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiControlPlane/serial/AddSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiControlPlane/serial/AddSecondaryNode (195.36s)
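As a possible follow-up to the AddSecondaryNode post-mortem above (standard kubectl usage, not part of the captured run), the state of the newly added ha-090000-m05 node and of the pods scheduled onto it could be inspected with:

	kubectl --context ha-090000 get nodes -o wide
	kubectl --context ha-090000 get pods -A -o wide --field-selector spec.nodeName=ha-090000-m05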

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.43s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-572000 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p mount-start-1-572000 ssh -- mount | grep 9p
mount_start_test.go:127: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p mount-start-1-572000 ssh -- mount | grep 9p: exit status 1 (122.207758ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
mount_start_test.go:129: failed to get mount information: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-572000 -n mount-start-1-572000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p mount-start-1-572000 -n mount-start-1-572000: exit status 6 (149.065319ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 04:06:03.409976    4597 status.go:417] kubeconfig endpoint: get endpoint: "mount-start-1-572000" does not appear in /Users/jenkins/minikube-integration/19313-1111/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "mount-start-1-572000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestMountStart/serial/VerifyMountFirst (0.43s)
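The test asserts that a 9p mount of the host directory is visible inside the VM; exit status 1 from the grep means no 9p filesystem was mounted. A minimal manual reproduction sketch, assuming the profile is started with the standard --mount/--mount-string flags (the exact flags used by mount_start_test.go are not shown in this excerpt):

	out/minikube-darwin-amd64 start -p mount-start-1-572000 --driver=hyperkit --mount --mount-string="/Users:/minikube-host"
	out/minikube-darwin-amd64 -p mount-start-1-572000 ssh -- "mount | grep 9p"

The stale-kubeconfig warning in the status output can be cleared with `minikube update-context`, as the message itself suggests.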

                                                
                                    
TestNoKubernetes/serial/Start (77.75s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-533000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:136: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-533000 --no-kubernetes --driver=hyperkit : exit status 90 (1m17.598623799s)

                                                
                                                
-- stdout --
	* [NoKubernetes-533000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1111/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1111/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting minikube without Kubernetes in cluster NoKubernetes-533000
	* Creating hyperkit VM (CPUs=2, Memory=6000MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 22 11:30:14 NoKubernetes-533000 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[517]: time="2024-07-22T11:30:14.326193246Z" level=info msg="Starting up"
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[517]: time="2024-07-22T11:30:14.326767644Z" level=info msg="containerd not running, starting managed containerd"
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[517]: time="2024-07-22T11:30:14.327312975Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=524
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.343335386Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.361026873Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.361097889Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.361159492Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.361194246Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.361304519Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.361345071Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.361486439Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.361525659Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.361554855Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.361583755Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.361667393Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.361840114Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.363362208Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.363416189Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.363593558Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.363636976Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.363731238Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.363798469Z" level=info msg="metadata content store policy set" policy=shared
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.366535449Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.366613520Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.366658390Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.366787723Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.366831850Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.366918281Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372337821Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372424126Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372459802Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372472635Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372482768Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372492031Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372500636Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372510191Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372519328Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372528691Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372537032Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372544307Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372557196Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372566336Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372574227Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372582868Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372590955Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372605707Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372615777Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372626404Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372635197Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372645276Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372652872Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372660306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372668035Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372679631Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372693194Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372701027Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372708251Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372756246Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372771020Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372779058Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372786616Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372793189Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372801104Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372807682Z" level=info msg="NRI interface is disabled by configuration."
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372929216Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.372986860Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.373056627Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 22 11:30:14 NoKubernetes-533000 dockerd[524]: time="2024-07-22T11:30:14.373090871Z" level=info msg="containerd successfully booted in 0.030575s"
	Jul 22 11:30:15 NoKubernetes-533000 dockerd[517]: time="2024-07-22T11:30:15.354700967Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 22 11:30:15 NoKubernetes-533000 dockerd[517]: time="2024-07-22T11:30:15.362163098Z" level=info msg="Loading containers: start."
	Jul 22 11:30:15 NoKubernetes-533000 dockerd[517]: time="2024-07-22T11:30:15.458463630Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 22 11:30:15 NoKubernetes-533000 dockerd[517]: time="2024-07-22T11:30:15.542705101Z" level=info msg="Loading containers: done."
	Jul 22 11:30:15 NoKubernetes-533000 dockerd[517]: time="2024-07-22T11:30:15.550070224Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 22 11:30:15 NoKubernetes-533000 dockerd[517]: time="2024-07-22T11:30:15.550204837Z" level=info msg="Daemon has completed initialization"
	Jul 22 11:30:15 NoKubernetes-533000 dockerd[517]: time="2024-07-22T11:30:15.578301132Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 22 11:30:15 NoKubernetes-533000 systemd[1]: Started Docker Application Container Engine.
	Jul 22 11:30:15 NoKubernetes-533000 dockerd[517]: time="2024-07-22T11:30:15.578437847Z" level=info msg="API listen on [::]:2376"
	Jul 22 11:30:16 NoKubernetes-533000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 22 11:30:16 NoKubernetes-533000 dockerd[517]: time="2024-07-22T11:30:16.510265747Z" level=info msg="Processing signal 'terminated'"
	Jul 22 11:30:16 NoKubernetes-533000 dockerd[517]: time="2024-07-22T11:30:16.511156238Z" level=info msg="Daemon shutdown complete"
	Jul 22 11:30:16 NoKubernetes-533000 dockerd[517]: time="2024-07-22T11:30:16.511258909Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 22 11:30:16 NoKubernetes-533000 dockerd[517]: time="2024-07-22T11:30:16.511317025Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=moby
	Jul 22 11:30:16 NoKubernetes-533000 dockerd[517]: time="2024-07-22T11:30:16.511331166Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 22 11:30:17 NoKubernetes-533000 systemd[1]: docker.service: Deactivated successfully.
	Jul 22 11:30:17 NoKubernetes-533000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 22 11:30:17 NoKubernetes-533000 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 11:30:17 NoKubernetes-533000 dockerd[916]: time="2024-07-22T11:30:17.552370932Z" level=info msg="Starting up"
	Jul 22 11:31:18 NoKubernetes-533000 dockerd[916]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 22 11:31:18 NoKubernetes-533000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 22 11:31:18 NoKubernetes-533000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 22 11:31:18 NoKubernetes-533000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
no_kubernetes_test.go:138: failed to start minikube with args: "out/minikube-darwin-amd64 start -p NoKubernetes-533000 --no-kubernetes --driver=hyperkit " : exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-533000 -n NoKubernetes-533000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p NoKubernetes-533000 -n NoKubernetes-533000: exit status 6 (150.633858ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 04:31:18.203600    6337 status.go:417] kubeconfig endpoint: get endpoint: "NoKubernetes-533000" does not appear in /Users/jenkins/minikube-integration/19313-1111/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "NoKubernetes-533000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestNoKubernetes/serial/Start (77.75s)
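The failing step here is the second dockerd start (pid 916) timing out while dialing /run/containerd/containerd.sock. A possible way to inspect this manually from the host, using the profile name from this run (standard systemctl/journalctl invocations, not taken from the test output):

	out/minikube-darwin-amd64 -p NoKubernetes-533000 ssh -- "sudo systemctl status docker containerd --no-pager"
	out/minikube-darwin-amd64 -p NoKubernetes-533000 ssh -- "sudo journalctl -u containerd --no-pager | tail -n 50"

This would show whether containerd itself came back up after docker.service was restarted, which the captured journal stops short of.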

                                                
                                    
TestStartStop/group/embed-certs/serial/FirstStart (76.92s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-781000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.3
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p embed-certs-781000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.3: exit status 90 (1m16.707599155s)

                                                
                                                
-- stdout --
	* [embed-certs-781000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1111/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1111/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on user configuration
	* Starting "embed-certs-781000" primary control-plane node in "embed-certs-781000" cluster
	* Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:52:53.038639    7354 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:52:53.038938    7354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:52:53.038943    7354 out.go:304] Setting ErrFile to fd 2...
	I0722 04:52:53.038947    7354 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:52:53.039129    7354 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1111/.minikube/bin
	I0722 04:52:53.040728    7354 out.go:298] Setting JSON to false
	I0722 04:52:53.063921    7354 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":6742,"bootTime":1721642431,"procs":460,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0722 04:52:53.064016    7354 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 04:52:53.087534    7354 out.go:177] * [embed-certs-781000] minikube v1.33.1 on Darwin 14.5
	I0722 04:52:53.129545    7354 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 04:52:53.129576    7354 notify.go:220] Checking for updates...
	I0722 04:52:53.172306    7354 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1111/kubeconfig
	I0722 04:52:53.214377    7354 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0722 04:52:53.256471    7354 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 04:52:53.329366    7354 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1111/.minikube
	I0722 04:52:53.387572    7354 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 04:52:53.408795    7354 config.go:182] Loaded profile config "default-k8s-diff-port-961000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:52:53.408902    7354 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 04:52:53.485436    7354 out.go:177] * Using the hyperkit driver based on user configuration
	I0722 04:52:53.543459    7354 start.go:297] selected driver: hyperkit
	I0722 04:52:53.543471    7354 start.go:901] validating driver "hyperkit" against <nil>
	I0722 04:52:53.543482    7354 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 04:52:53.546620    7354 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:52:53.546744    7354 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19313-1111/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0722 04:52:53.555630    7354 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0722 04:52:53.559783    7354 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:52:53.559810    7354 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0722 04:52:53.559854    7354 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 04:52:53.560068    7354 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0722 04:52:53.560123    7354 cni.go:84] Creating CNI manager for ""
	I0722 04:52:53.560141    7354 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 04:52:53.560152    7354 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 04:52:53.560221    7354 start.go:340] cluster config:
	{Name:embed-certs-781000 KeepContext:false EmbedCerts:true MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-781000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 04:52:53.560312    7354 iso.go:125] acquiring lock: {Name:mk28fa3b914b659bb36b0449a0ad3ab1345dae32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 04:52:53.597529    7354 out.go:177] * Starting "embed-certs-781000" primary control-plane node in "embed-certs-781000" cluster
	I0722 04:52:53.618236    7354 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 04:52:53.618273    7354 preload.go:146] Found local preload: /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0722 04:52:53.618291    7354 cache.go:56] Caching tarball of preloaded images
	I0722 04:52:53.618422    7354 preload.go:172] Found /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 in cache, skipping download
	I0722 04:52:53.618433    7354 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 04:52:53.618513    7354 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/embed-certs-781000/config.json ...
	I0722 04:52:53.618529    7354 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/embed-certs-781000/config.json: {Name:mk6d4448fb361620d2b14f3492ea6bbdb3fb909d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 04:52:53.619123    7354 start.go:360] acquireMachinesLock for embed-certs-781000: {Name:mk52223550765842aacf96640479870ec8b5e985 Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0722 04:52:53.619177    7354 start.go:364] duration metric: took 44.596µs to acquireMachinesLock for "embed-certs-781000"
	I0722 04:52:53.619198    7354 start.go:93] Provisioning new machine with config: &{Name:embed-certs-781000 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:2200 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.3 ClusterName:embed-certs-781000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] Mou
ntPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}
	I0722 04:52:53.619243    7354 start.go:125] createHost starting for "" (driver="hyperkit")
	I0722 04:52:53.677564    7354 out.go:204] * Creating hyperkit VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0722 04:52:53.677826    7354 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:52:53.677897    7354 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:52:53.688290    7354 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:55539
	I0722 04:52:53.688725    7354 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:52:53.689172    7354 main.go:141] libmachine: Using API Version  1
	I0722 04:52:53.689183    7354 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:52:53.689483    7354 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:52:53.689629    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetMachineName
	I0722 04:52:53.689755    7354 main.go:141] libmachine: (embed-certs-781000) Calling .DriverName
	I0722 04:52:53.689863    7354 start.go:159] libmachine.API.Create for "embed-certs-781000" (driver="hyperkit")
	I0722 04:52:53.689890    7354 client.go:168] LocalClient.Create starting
	I0722 04:52:53.689926    7354 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem
	I0722 04:52:53.689982    7354 main.go:141] libmachine: Decoding PEM data...
	I0722 04:52:53.689999    7354 main.go:141] libmachine: Parsing certificate...
	I0722 04:52:53.690063    7354 main.go:141] libmachine: Reading certificate data from /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem
	I0722 04:52:53.690102    7354 main.go:141] libmachine: Decoding PEM data...
	I0722 04:52:53.690115    7354 main.go:141] libmachine: Parsing certificate...
	I0722 04:52:53.690134    7354 main.go:141] libmachine: Running pre-create checks...
	I0722 04:52:53.690142    7354 main.go:141] libmachine: (embed-certs-781000) Calling .PreCreateCheck
	I0722 04:52:53.690239    7354 main.go:141] libmachine: (embed-certs-781000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 04:52:53.690449    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetConfigRaw
	I0722 04:52:53.690933    7354 main.go:141] libmachine: Creating machine...
	I0722 04:52:53.690942    7354 main.go:141] libmachine: (embed-certs-781000) Calling .Create
	I0722 04:52:53.691026    7354 main.go:141] libmachine: (embed-certs-781000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 04:52:53.691154    7354 main.go:141] libmachine: (embed-certs-781000) DBG | I0722 04:52:53.691023    7367 common.go:145] Making disk image using store path: /Users/jenkins/minikube-integration/19313-1111/.minikube
	I0722 04:52:53.691219    7354 main.go:141] libmachine: (embed-certs-781000) Downloading /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/boot2docker.iso from file:///Users/jenkins/minikube-integration/19313-1111/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso...
	I0722 04:52:53.871182    7354 main.go:141] libmachine: (embed-certs-781000) DBG | I0722 04:52:53.871080    7367 common.go:152] Creating ssh key: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000/id_rsa...
	I0722 04:52:54.007535    7354 main.go:141] libmachine: (embed-certs-781000) DBG | I0722 04:52:54.007451    7367 common.go:158] Creating raw disk image: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000/embed-certs-781000.rawdisk...
	I0722 04:52:54.007552    7354 main.go:141] libmachine: (embed-certs-781000) DBG | Writing magic tar header
	I0722 04:52:54.007592    7354 main.go:141] libmachine: (embed-certs-781000) DBG | Writing SSH key tar header
	I0722 04:52:54.007917    7354 main.go:141] libmachine: (embed-certs-781000) DBG | I0722 04:52:54.007854    7367 common.go:172] Fixing permissions on /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000 ...
	I0722 04:52:54.377615    7354 main.go:141] libmachine: (embed-certs-781000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 04:52:54.377641    7354 main.go:141] libmachine: (embed-certs-781000) DBG | clean start, hyperkit pid file doesn't exist: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000/hyperkit.pid
	I0722 04:52:54.377724    7354 main.go:141] libmachine: (embed-certs-781000) DBG | Using UUID 93fae6b3-704b-4640-8a32-b05887535bc5
	I0722 04:52:54.431466    7354 main.go:141] libmachine: (embed-certs-781000) DBG | Generated MAC 62:53:74:c7:db:88
	I0722 04:52:54.431487    7354 main.go:141] libmachine: (embed-certs-781000) DBG | Starting with cmdline: loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=embed-certs-781000
	I0722 04:52:54.431513    7354 main.go:141] libmachine: (embed-certs-781000) DBG | 2024/07/22 04:52:54 DEBUG: hyperkit: Start &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"93fae6b3-704b-4640-8a32-b05887535bc5", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:
(*os.Process)(nil)}
	I0722 04:52:54.431544    7354 main.go:141] libmachine: (embed-certs-781000) DBG | 2024/07/22 04:52:54 DEBUG: hyperkit: check &hyperkit.HyperKit{HyperKit:"/usr/local/bin/hyperkit", Argv0:"", StateDir:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000", VPNKitSock:"", VPNKitUUID:"", VPNKitPreferredIPv4:"", UUID:"93fae6b3-704b-4640-8a32-b05887535bc5", Disks:[]hyperkit.Disk{(*hyperkit.RawDisk)(0xc0001d0240)}, ISOImages:[]string{"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000/boot2docker.iso"}, VSock:false, VSockDir:"", VSockPorts:[]int(nil), VSockGuestCID:3, VMNet:true, Sockets9P:[]hyperkit.Socket9P(nil), Kernel:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000/bzimage", Initrd:"/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000/initrd", Bootrom:"", CPUs:2, Memory:2200, Console:1, Serials:[]hyperkit.Serial(nil), Pid:0, Arguments:[]string(nil), CmdLine:"", process:
(*os.Process)(nil)}
	I0722 04:52:54.431610    7354 main.go:141] libmachine: (embed-certs-781000) DBG | 2024/07/22 04:52:54 DEBUG: hyperkit: Arguments: []string{"-A", "-u", "-F", "/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000/hyperkit.pid", "-c", "2", "-m", "2200M", "-s", "0:0,hostbridge", "-s", "31,lpc", "-s", "1:0,virtio-net", "-U", "93fae6b3-704b-4640-8a32-b05887535bc5", "-s", "2:0,virtio-blk,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000/embed-certs-781000.rawdisk", "-s", "3,ahci-cd,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000/boot2docker.iso", "-s", "4,virtio-rnd", "-l", "com1,autopty=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000/tty,log=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000/console-ring", "-f", "kexec,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000/bzimage,/Users/jenkins/minikube-
integration/19313-1111/.minikube/machines/embed-certs-781000/initrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=embed-certs-781000"}
	I0722 04:52:54.431660    7354 main.go:141] libmachine: (embed-certs-781000) DBG | 2024/07/22 04:52:54 DEBUG: hyperkit: CmdLine: "/usr/local/bin/hyperkit -A -u -F /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000/hyperkit.pid -c 2 -m 2200M -s 0:0,hostbridge -s 31,lpc -s 1:0,virtio-net -U 93fae6b3-704b-4640-8a32-b05887535bc5 -s 2:0,virtio-blk,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000/embed-certs-781000.rawdisk -s 3,ahci-cd,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000/boot2docker.iso -s 4,virtio-rnd -l com1,autopty=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000/tty,log=/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000/console-ring -f kexec,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000/bzimage,/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000/i
nitrd,earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=embed-certs-781000"
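The Arguments and CmdLine debug lines above record the exact hyperkit invocation used to create this VM: 2 vCPUs, 2200M of memory, a virtio-net NIC (whose generated MAC is later matched against the DHCP leases), the raw disk and boot2docker ISO attached as block/CD devices, a serial console on an autopty, and a direct kexec boot of the cached bzimage/initrd. A minimal Go sketch that re-issues that same command is shown below; the paths and UUID are taken from the log, and this is illustrative only, not the driver's own code.

	// Sketch only: re-run the hyperkit invocation recorded in the CmdLine line above.
	// Paths/UUID copied from the log; requires hyperkit and sufficient privileges.
	package main

	import (
		"log"
		"os"
		"os/exec"
	)

	func main() {
		machineDir := "/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000"
		kernelArgs := "earlyprintk=serial loglevel=3 console=ttyS0 console=tty0 noembed nomodeset norestore waitusb=10 " +
			"systemd.legacy_systemd_cgroup_controller=yes random.trust_cpu=on hw_rng_model=virtio base host=embed-certs-781000"

		cmd := exec.Command("/usr/local/bin/hyperkit",
			"-A", "-u",
			"-F", machineDir+"/hyperkit.pid", // pid file the driver reads back ("Pid is 7378")
			"-c", "2", "-m", "2200M", // CPUs/memory from the cluster config
			"-s", "0:0,hostbridge", "-s", "31,lpc",
			"-s", "1:0,virtio-net", // NIC; its MAC is searched for in /var/db/dhcpd_leases
			"-U", "93fae6b3-704b-4640-8a32-b05887535bc5",
			"-s", "2:0,virtio-blk,"+machineDir+"/embed-certs-781000.rawdisk",
			"-s", "3,ahci-cd,"+machineDir+"/boot2docker.iso",
			"-s", "4,virtio-rnd",
			"-l", "com1,autopty="+machineDir+"/tty,log="+machineDir+"/console-ring",
			"-f", "kexec,"+machineDir+"/bzimage,"+machineDir+"/initrd,"+kernelArgs,
		)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			log.Fatalf("hyperkit exited: %v", err)
		}
	}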
	I0722 04:52:54.431679    7354 main.go:141] libmachine: (embed-certs-781000) DBG | 2024/07/22 04:52:54 DEBUG: hyperkit: Redirecting stdout/stderr to logger
	I0722 04:52:54.434895    7354 main.go:141] libmachine: (embed-certs-781000) DBG | 2024/07/22 04:52:54 DEBUG: hyperkit: Pid is 7378
	I0722 04:52:54.435300    7354 main.go:141] libmachine: (embed-certs-781000) DBG | Attempt 0
	I0722 04:52:54.435315    7354 main.go:141] libmachine: (embed-certs-781000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 04:52:54.435451    7354 main.go:141] libmachine: (embed-certs-781000) DBG | hyperkit pid from json: 7378
	I0722 04:52:54.436855    7354 main.go:141] libmachine: (embed-certs-781000) DBG | Searching for 62:53:74:c7:db:88 in /var/db/dhcpd_leases ...
	I0722 04:52:54.436982    7354 main.go:141] libmachine: (embed-certs-781000) DBG | Found 36 entries in /var/db/dhcpd_leases!
	I0722 04:52:54.436999    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:9a:b4:f9:aa:74:a0 ID:1,9a:b4:f9:aa:74:a0 Lease:0x669f9905}
	I0722 04:52:54.437016    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:7a:ee:70:ee:14:f5 ID:1,7a:ee:70:ee:14:f5 Lease:0x669f986e}
	I0722 04:52:54.437028    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:6e:be:a1:44:5:4b ID:1,6e:be:a1:44:5:4b Lease:0x669f96ec}
	I0722 04:52:54.437041    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:42:1e:5:40:b8:f4 ID:1,42:1e:5:40:b8:f4 Lease:0x669f96aa}
	I0722 04:52:54.437051    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:96:29:87:f9:fe:99 ID:1,96:29:87:f9:fe:99 Lease:0x669f954f}
	I0722 04:52:54.437069    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:f7:56:da:5e:c ID:1,56:f7:56:da:5e:c Lease:0x669e4387}
	I0722 04:52:54.437083    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:8e:5:f6:92:46:58 ID:1,8e:5:f6:92:46:58 Lease:0x669f94d8}
	I0722 04:52:54.437099    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:7e:32:a8:ed:65:b7 ID:1,7e:32:a8:ed:65:b7 Lease:0x669f94bc}
	I0722 04:52:54.437113    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:2a:2b:92:9c:a2:5a ID:1,2a:2b:92:9c:a2:5a Lease:0x669f94af}
	I0722 04:52:54.437123    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:32:fa:b2:68:99:cc ID:1,32:fa:b2:68:99:cc Lease:0x669e4324}
	I0722 04:52:54.437156    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:fa:9b:b5:7:d4:be ID:1,fa:9b:b5:7:d4:be Lease:0x669e4323}
	I0722 04:52:54.437176    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ae:e4:f6:76:b9:c0 ID:1,ae:e4:f6:76:b9:c0 Lease:0x669e42b7}
	I0722 04:52:54.437191    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:4e:81:18:7d:a8:7e ID:1,4e:81:18:7d:a8:7e Lease:0x669f93fd}
	I0722 04:52:54.437214    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:4a:af:65:82:aa:c5 ID:1,4a:af:65:82:aa:c5 Lease:0x669f93c8}
	I0722 04:52:54.437239    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:12:af:ac:fa:ca:4c ID:1,12:af:ac:fa:ca:4c Lease:0x669f93d2}
	I0722 04:52:54.437269    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:26:4:fc:35:c8:96 ID:1,26:4:fc:35:c8:96 Lease:0x669f9389}
	I0722 04:52:54.437285    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:de:22:65:a4:f5:e ID:1,de:22:65:a4:f5:e Lease:0x669f9319}
	I0722 04:52:54.437301    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:b6:c9:f9:a0:af:9a ID:1,b6:c9:f9:a0:af:9a Lease:0x669f92aa}
	I0722 04:52:54.437317    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:7a:6:64:ab:c5:b3 ID:1,7a:6:64:ab:c5:b3 Lease:0x669f922f}
	I0722 04:52:54.437329    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4a:ba:63:18:2b:e1 ID:1,4a:ba:63:18:2b:e1 Lease:0x669e403d}
	I0722 04:52:54.437341    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:6a:5a:35:14:69:f3 ID:1,6a:5a:35:14:69:f3 Lease:0x669e3f9a}
	I0722 04:52:54.437354    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:fa:95:16:da:1f:ab ID:1,fa:95:16:da:1f:ab Lease:0x669f9174}
	I0722 04:52:54.437367    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:c2:21:26:80:b3:ec ID:1,c2:21:26:80:b3:ec Lease:0x669f9136}
	I0722 04:52:54.437378    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:46:5a:9d:73:3c:7b ID:1,46:5a:9d:73:3c:7b Lease:0x669e3d1c}
	I0722 04:52:54.437391    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:aa:fb:72:a3:d5:2a ID:1,aa:fb:72:a3:d5:2a Lease:0x669f8e5d}
	I0722 04:52:54.437404    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:f9:3a:b4:3:19 ID:1,b2:f9:3a:b4:3:19 Lease:0x669f8e35}
	I0722 04:52:54.437417    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d6:39:9d:61:bc:c3 ID:1,d6:39:9d:61:bc:c3 Lease:0x669f8df2}
	I0722 04:52:54.437440    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:f2:a:93:ea:6:d8 ID:1,f2:a:93:ea:6:d8 Lease:0x669e3c67}
	I0722 04:52:54.437460    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:a9:c8:fd:de:59 ID:1,92:a9:c8:fd:de:59 Lease:0x669f8ce7}
	I0722 04:52:54.437477    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ca:7d:32:d9:5d:55 ID:1,ca:7d:32:d9:5d:55 Lease:0x669f8c7f}
	I0722 04:52:54.437494    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:b6:20:31:8d:e0:89 ID:1,b6:20:31:8d:e0:89 Lease:0x669e3a76}
	I0722 04:52:54.437508    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4e:65:fa:f9:26:3 ID:1,4e:65:fa:f9:26:3 Lease:0x669f8c45}
	I0722 04:52:54.437525    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:de:e:68:47:cf:44 ID:1,de:e:68:47:cf:44 Lease:0x669f8c1b}
	I0722 04:52:54.437539    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:5e:d2:75:4c:13:a8 ID:1,5e:d2:75:4c:13:a8 Lease:0x669f882a}
	I0722 04:52:54.437553    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:9c:b9:4c:57:c9 ID:1,42:9c:b9:4c:57:c9 Lease:0x669f8766}
	I0722 04:52:54.437565    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:5f:b2:6a:58:a7 ID:1,f6:5f:b2:6a:58:a7 Lease:0x669f8605}
	I0722 04:52:54.443206    7354 main.go:141] libmachine: (embed-certs-781000) DBG | 2024/07/22 04:52:54 INFO : hyperkit: stderr: Using fd 5 for I/O notifications
	I0722 04:52:54.451973    7354 main.go:141] libmachine: (embed-certs-781000) DBG | 2024/07/22 04:52:54 INFO : hyperkit: stderr: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000/boot2docker.iso: fcntl(F_PUNCHHOLE) Operation not permitted: block device will not support TRIM/DISCARD
	I0722 04:52:54.453084    7354 main.go:141] libmachine: (embed-certs-781000) DBG | 2024/07/22 04:52:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0722 04:52:54.453107    7354 main.go:141] libmachine: (embed-certs-781000) DBG | 2024/07/22 04:52:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0722 04:52:54.453115    7354 main.go:141] libmachine: (embed-certs-781000) DBG | 2024/07/22 04:52:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0722 04:52:54.453123    7354 main.go:141] libmachine: (embed-certs-781000) DBG | 2024/07/22 04:52:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0722 04:52:54.843754    7354 main.go:141] libmachine: (embed-certs-781000) DBG | 2024/07/22 04:52:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 0
	I0722 04:52:54.843770    7354 main.go:141] libmachine: (embed-certs-781000) DBG | 2024/07/22 04:52:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 0
	I0722 04:52:54.958380    7354 main.go:141] libmachine: (embed-certs-781000) DBG | 2024/07/22 04:52:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 2 bit: 22 unspecified don't care: bit is 0
	I0722 04:52:54.958401    7354 main.go:141] libmachine: (embed-certs-781000) DBG | 2024/07/22 04:52:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 12 unspecified don't care: bit is 0
	I0722 04:52:54.958419    7354 main.go:141] libmachine: (embed-certs-781000) DBG | 2024/07/22 04:52:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 4 bit: 20 unspecified don't care: bit is 0
	I0722 04:52:54.958429    7354 main.go:141] libmachine: (embed-certs-781000) DBG | 2024/07/22 04:52:54 INFO : hyperkit: stderr: vmx_set_ctlreg: cap_field: 3 bit: 13 unspecified don't care: bit is 0
	I0722 04:52:54.959291    7354 main.go:141] libmachine: (embed-certs-781000) DBG | 2024/07/22 04:52:54 INFO : hyperkit: stderr: rdmsr to register 0x3a on vcpu 1
	I0722 04:52:54.959302    7354 main.go:141] libmachine: (embed-certs-781000) DBG | 2024/07/22 04:52:54 INFO : hyperkit: stderr: rdmsr to register 0x140 on vcpu 1
	I0722 04:52:56.438932    7354 main.go:141] libmachine: (embed-certs-781000) DBG | Attempt 1
	I0722 04:52:56.438949    7354 main.go:141] libmachine: (embed-certs-781000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 04:52:56.438974    7354 main.go:141] libmachine: (embed-certs-781000) DBG | hyperkit pid from json: 7378
	I0722 04:52:56.439805    7354 main.go:141] libmachine: (embed-certs-781000) DBG | Searching for 62:53:74:c7:db:88 in /var/db/dhcpd_leases ...
	I0722 04:52:56.439870    7354 main.go:141] libmachine: (embed-certs-781000) DBG | Found 36 entries in /var/db/dhcpd_leases!
	I0722 04:52:56.439881    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:9a:b4:f9:aa:74:a0 ID:1,9a:b4:f9:aa:74:a0 Lease:0x669f9905}
	I0722 04:52:56.439904    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:7a:ee:70:ee:14:f5 ID:1,7a:ee:70:ee:14:f5 Lease:0x669f986e}
	I0722 04:52:56.439914    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:6e:be:a1:44:5:4b ID:1,6e:be:a1:44:5:4b Lease:0x669f96ec}
	I0722 04:52:56.439922    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:42:1e:5:40:b8:f4 ID:1,42:1e:5:40:b8:f4 Lease:0x669f96aa}
	I0722 04:52:56.439927    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:96:29:87:f9:fe:99 ID:1,96:29:87:f9:fe:99 Lease:0x669f954f}
	I0722 04:52:56.439935    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:f7:56:da:5e:c ID:1,56:f7:56:da:5e:c Lease:0x669e4387}
	I0722 04:52:56.439944    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:8e:5:f6:92:46:58 ID:1,8e:5:f6:92:46:58 Lease:0x669f94d8}
	I0722 04:52:56.439958    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:7e:32:a8:ed:65:b7 ID:1,7e:32:a8:ed:65:b7 Lease:0x669f94bc}
	I0722 04:52:56.439971    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:2a:2b:92:9c:a2:5a ID:1,2a:2b:92:9c:a2:5a Lease:0x669f94af}
	I0722 04:52:56.439987    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:32:fa:b2:68:99:cc ID:1,32:fa:b2:68:99:cc Lease:0x669e4324}
	I0722 04:52:56.440000    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:fa:9b:b5:7:d4:be ID:1,fa:9b:b5:7:d4:be Lease:0x669e4323}
	I0722 04:52:56.440010    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ae:e4:f6:76:b9:c0 ID:1,ae:e4:f6:76:b9:c0 Lease:0x669e42b7}
	I0722 04:52:56.440028    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:4e:81:18:7d:a8:7e ID:1,4e:81:18:7d:a8:7e Lease:0x669f93fd}
	I0722 04:52:56.440038    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:4a:af:65:82:aa:c5 ID:1,4a:af:65:82:aa:c5 Lease:0x669f93c8}
	I0722 04:52:56.440048    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:12:af:ac:fa:ca:4c ID:1,12:af:ac:fa:ca:4c Lease:0x669f93d2}
	I0722 04:52:56.440056    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:26:4:fc:35:c8:96 ID:1,26:4:fc:35:c8:96 Lease:0x669f9389}
	I0722 04:52:56.440063    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:de:22:65:a4:f5:e ID:1,de:22:65:a4:f5:e Lease:0x669f9319}
	I0722 04:52:56.440074    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:b6:c9:f9:a0:af:9a ID:1,b6:c9:f9:a0:af:9a Lease:0x669f92aa}
	I0722 04:52:56.440085    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:7a:6:64:ab:c5:b3 ID:1,7a:6:64:ab:c5:b3 Lease:0x669f922f}
	I0722 04:52:56.440095    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4a:ba:63:18:2b:e1 ID:1,4a:ba:63:18:2b:e1 Lease:0x669e403d}
	I0722 04:52:56.440107    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:6a:5a:35:14:69:f3 ID:1,6a:5a:35:14:69:f3 Lease:0x669e3f9a}
	I0722 04:52:56.440114    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:fa:95:16:da:1f:ab ID:1,fa:95:16:da:1f:ab Lease:0x669f9174}
	I0722 04:52:56.440122    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:c2:21:26:80:b3:ec ID:1,c2:21:26:80:b3:ec Lease:0x669f9136}
	I0722 04:52:56.440129    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:46:5a:9d:73:3c:7b ID:1,46:5a:9d:73:3c:7b Lease:0x669e3d1c}
	I0722 04:52:56.440138    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:aa:fb:72:a3:d5:2a ID:1,aa:fb:72:a3:d5:2a Lease:0x669f8e5d}
	I0722 04:52:56.440145    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:f9:3a:b4:3:19 ID:1,b2:f9:3a:b4:3:19 Lease:0x669f8e35}
	I0722 04:52:56.440150    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d6:39:9d:61:bc:c3 ID:1,d6:39:9d:61:bc:c3 Lease:0x669f8df2}
	I0722 04:52:56.440160    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:f2:a:93:ea:6:d8 ID:1,f2:a:93:ea:6:d8 Lease:0x669e3c67}
	I0722 04:52:56.440170    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:a9:c8:fd:de:59 ID:1,92:a9:c8:fd:de:59 Lease:0x669f8ce7}
	I0722 04:52:56.440179    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ca:7d:32:d9:5d:55 ID:1,ca:7d:32:d9:5d:55 Lease:0x669f8c7f}
	I0722 04:52:56.440195    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:b6:20:31:8d:e0:89 ID:1,b6:20:31:8d:e0:89 Lease:0x669e3a76}
	I0722 04:52:56.440211    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4e:65:fa:f9:26:3 ID:1,4e:65:fa:f9:26:3 Lease:0x669f8c45}
	I0722 04:52:56.440223    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:de:e:68:47:cf:44 ID:1,de:e:68:47:cf:44 Lease:0x669f8c1b}
	I0722 04:52:56.440231    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:5e:d2:75:4c:13:a8 ID:1,5e:d2:75:4c:13:a8 Lease:0x669f882a}
	I0722 04:52:56.440239    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:9c:b9:4c:57:c9 ID:1,42:9c:b9:4c:57:c9 Lease:0x669f8766}
	I0722 04:52:56.440253    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:5f:b2:6a:58:a7 ID:1,f6:5f:b2:6a:58:a7 Lease:0x669f8605}
	I0722 04:52:58.440079    7354 main.go:141] libmachine: (embed-certs-781000) DBG | Attempt 2
	I0722 04:52:58.440097    7354 main.go:141] libmachine: (embed-certs-781000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 04:52:58.440175    7354 main.go:141] libmachine: (embed-certs-781000) DBG | hyperkit pid from json: 7378
	I0722 04:52:58.440960    7354 main.go:141] libmachine: (embed-certs-781000) DBG | Searching for 62:53:74:c7:db:88 in /var/db/dhcpd_leases ...
	I0722 04:52:58.441036    7354 main.go:141] libmachine: (embed-certs-781000) DBG | Found 36 entries in /var/db/dhcpd_leases!
	I0722 04:52:58.441053    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:9a:b4:f9:aa:74:a0 ID:1,9a:b4:f9:aa:74:a0 Lease:0x669f9905}
	I0722 04:52:58.441063    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:7a:ee:70:ee:14:f5 ID:1,7a:ee:70:ee:14:f5 Lease:0x669f986e}
	I0722 04:52:58.441081    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:6e:be:a1:44:5:4b ID:1,6e:be:a1:44:5:4b Lease:0x669f96ec}
	I0722 04:52:58.441089    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:42:1e:5:40:b8:f4 ID:1,42:1e:5:40:b8:f4 Lease:0x669f96aa}
	I0722 04:52:58.441098    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:96:29:87:f9:fe:99 ID:1,96:29:87:f9:fe:99 Lease:0x669f954f}
	I0722 04:52:58.441108    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:f7:56:da:5e:c ID:1,56:f7:56:da:5e:c Lease:0x669e4387}
	I0722 04:52:58.441116    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:8e:5:f6:92:46:58 ID:1,8e:5:f6:92:46:58 Lease:0x669f94d8}
	I0722 04:52:58.441123    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:7e:32:a8:ed:65:b7 ID:1,7e:32:a8:ed:65:b7 Lease:0x669f94bc}
	I0722 04:52:58.441130    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:2a:2b:92:9c:a2:5a ID:1,2a:2b:92:9c:a2:5a Lease:0x669f94af}
	I0722 04:52:58.441138    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:32:fa:b2:68:99:cc ID:1,32:fa:b2:68:99:cc Lease:0x669e4324}
	I0722 04:52:58.441147    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:fa:9b:b5:7:d4:be ID:1,fa:9b:b5:7:d4:be Lease:0x669e4323}
	I0722 04:52:58.441154    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ae:e4:f6:76:b9:c0 ID:1,ae:e4:f6:76:b9:c0 Lease:0x669e42b7}
	I0722 04:52:58.441162    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:4e:81:18:7d:a8:7e ID:1,4e:81:18:7d:a8:7e Lease:0x669f93fd}
	I0722 04:52:58.441171    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:4a:af:65:82:aa:c5 ID:1,4a:af:65:82:aa:c5 Lease:0x669f93c8}
	I0722 04:52:58.441180    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:12:af:ac:fa:ca:4c ID:1,12:af:ac:fa:ca:4c Lease:0x669f93d2}
	I0722 04:52:58.441187    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:26:4:fc:35:c8:96 ID:1,26:4:fc:35:c8:96 Lease:0x669f9389}
	I0722 04:52:58.441200    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:de:22:65:a4:f5:e ID:1,de:22:65:a4:f5:e Lease:0x669f9319}
	I0722 04:52:58.441208    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:b6:c9:f9:a0:af:9a ID:1,b6:c9:f9:a0:af:9a Lease:0x669f92aa}
	I0722 04:52:58.441215    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:7a:6:64:ab:c5:b3 ID:1,7a:6:64:ab:c5:b3 Lease:0x669f922f}
	I0722 04:52:58.441221    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4a:ba:63:18:2b:e1 ID:1,4a:ba:63:18:2b:e1 Lease:0x669e403d}
	I0722 04:52:58.441229    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:6a:5a:35:14:69:f3 ID:1,6a:5a:35:14:69:f3 Lease:0x669e3f9a}
	I0722 04:52:58.441240    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:fa:95:16:da:1f:ab ID:1,fa:95:16:da:1f:ab Lease:0x669f9174}
	I0722 04:52:58.441249    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:c2:21:26:80:b3:ec ID:1,c2:21:26:80:b3:ec Lease:0x669f9136}
	I0722 04:52:58.441256    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:46:5a:9d:73:3c:7b ID:1,46:5a:9d:73:3c:7b Lease:0x669e3d1c}
	I0722 04:52:58.441263    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:aa:fb:72:a3:d5:2a ID:1,aa:fb:72:a3:d5:2a Lease:0x669f8e5d}
	I0722 04:52:58.441270    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:f9:3a:b4:3:19 ID:1,b2:f9:3a:b4:3:19 Lease:0x669f8e35}
	I0722 04:52:58.441277    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d6:39:9d:61:bc:c3 ID:1,d6:39:9d:61:bc:c3 Lease:0x669f8df2}
	I0722 04:52:58.441282    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:f2:a:93:ea:6:d8 ID:1,f2:a:93:ea:6:d8 Lease:0x669e3c67}
	I0722 04:52:58.441290    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:a9:c8:fd:de:59 ID:1,92:a9:c8:fd:de:59 Lease:0x669f8ce7}
	I0722 04:52:58.441297    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ca:7d:32:d9:5d:55 ID:1,ca:7d:32:d9:5d:55 Lease:0x669f8c7f}
	I0722 04:52:58.441303    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:b6:20:31:8d:e0:89 ID:1,b6:20:31:8d:e0:89 Lease:0x669e3a76}
	I0722 04:52:58.441314    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4e:65:fa:f9:26:3 ID:1,4e:65:fa:f9:26:3 Lease:0x669f8c45}
	I0722 04:52:58.441321    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:de:e:68:47:cf:44 ID:1,de:e:68:47:cf:44 Lease:0x669f8c1b}
	I0722 04:52:58.441329    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:5e:d2:75:4c:13:a8 ID:1,5e:d2:75:4c:13:a8 Lease:0x669f882a}
	I0722 04:52:58.441336    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:9c:b9:4c:57:c9 ID:1,42:9c:b9:4c:57:c9 Lease:0x669f8766}
	I0722 04:52:58.441344    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:5f:b2:6a:58:a7 ID:1,f6:5f:b2:6a:58:a7 Lease:0x669f8605}
	I0722 04:53:00.293997    7354 main.go:141] libmachine: (embed-certs-781000) DBG | 2024/07/22 04:53:00 INFO : hyperkit: stderr: rdmsr to register 0x64d on vcpu 1
	I0722 04:53:00.294022    7354 main.go:141] libmachine: (embed-certs-781000) DBG | 2024/07/22 04:53:00 INFO : hyperkit: stderr: rdmsr to register 0x64e on vcpu 1
	I0722 04:53:00.294032    7354 main.go:141] libmachine: (embed-certs-781000) DBG | 2024/07/22 04:53:00 INFO : hyperkit: stderr: rdmsr to register 0x34 on vcpu 1
	I0722 04:53:00.317724    7354 main.go:141] libmachine: (embed-certs-781000) DBG | 2024/07/22 04:53:00 INFO : hyperkit: stderr: rdmsr to register 0xc0011029 on vcpu 1
	I0722 04:53:00.441662    7354 main.go:141] libmachine: (embed-certs-781000) DBG | Attempt 3
	I0722 04:53:00.441685    7354 main.go:141] libmachine: (embed-certs-781000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 04:53:00.441866    7354 main.go:141] libmachine: (embed-certs-781000) DBG | hyperkit pid from json: 7378
	I0722 04:53:00.443326    7354 main.go:141] libmachine: (embed-certs-781000) DBG | Searching for 62:53:74:c7:db:88 in /var/db/dhcpd_leases ...
	I0722 04:53:00.443472    7354 main.go:141] libmachine: (embed-certs-781000) DBG | Found 36 entries in /var/db/dhcpd_leases!
	I0722 04:53:00.443528    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:9a:b4:f9:aa:74:a0 ID:1,9a:b4:f9:aa:74:a0 Lease:0x669f9905}
	I0722 04:53:00.443566    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:7a:ee:70:ee:14:f5 ID:1,7a:ee:70:ee:14:f5 Lease:0x669f986e}
	I0722 04:53:00.443600    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:6e:be:a1:44:5:4b ID:1,6e:be:a1:44:5:4b Lease:0x669f96ec}
	I0722 04:53:00.443620    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:42:1e:5:40:b8:f4 ID:1,42:1e:5:40:b8:f4 Lease:0x669f96aa}
	I0722 04:53:00.443637    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:96:29:87:f9:fe:99 ID:1,96:29:87:f9:fe:99 Lease:0x669f954f}
	I0722 04:53:00.443648    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:f7:56:da:5e:c ID:1,56:f7:56:da:5e:c Lease:0x669e4387}
	I0722 04:53:00.443661    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:8e:5:f6:92:46:58 ID:1,8e:5:f6:92:46:58 Lease:0x669f94d8}
	I0722 04:53:00.443682    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:7e:32:a8:ed:65:b7 ID:1,7e:32:a8:ed:65:b7 Lease:0x669f94bc}
	I0722 04:53:00.443700    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:2a:2b:92:9c:a2:5a ID:1,2a:2b:92:9c:a2:5a Lease:0x669f94af}
	I0722 04:53:00.443711    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:32:fa:b2:68:99:cc ID:1,32:fa:b2:68:99:cc Lease:0x669e4324}
	I0722 04:53:00.443720    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:fa:9b:b5:7:d4:be ID:1,fa:9b:b5:7:d4:be Lease:0x669e4323}
	I0722 04:53:00.443754    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ae:e4:f6:76:b9:c0 ID:1,ae:e4:f6:76:b9:c0 Lease:0x669e42b7}
	I0722 04:53:00.443770    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:4e:81:18:7d:a8:7e ID:1,4e:81:18:7d:a8:7e Lease:0x669f93fd}
	I0722 04:53:00.443780    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:4a:af:65:82:aa:c5 ID:1,4a:af:65:82:aa:c5 Lease:0x669f93c8}
	I0722 04:53:00.443793    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:12:af:ac:fa:ca:4c ID:1,12:af:ac:fa:ca:4c Lease:0x669f93d2}
	I0722 04:53:00.443804    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:26:4:fc:35:c8:96 ID:1,26:4:fc:35:c8:96 Lease:0x669f9389}
	I0722 04:53:00.443815    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:de:22:65:a4:f5:e ID:1,de:22:65:a4:f5:e Lease:0x669f9319}
	I0722 04:53:00.443824    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:b6:c9:f9:a0:af:9a ID:1,b6:c9:f9:a0:af:9a Lease:0x669f92aa}
	I0722 04:53:00.443835    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:7a:6:64:ab:c5:b3 ID:1,7a:6:64:ab:c5:b3 Lease:0x669f922f}
	I0722 04:53:00.443843    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4a:ba:63:18:2b:e1 ID:1,4a:ba:63:18:2b:e1 Lease:0x669e403d}
	I0722 04:53:00.443852    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:6a:5a:35:14:69:f3 ID:1,6a:5a:35:14:69:f3 Lease:0x669e3f9a}
	I0722 04:53:00.443863    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:fa:95:16:da:1f:ab ID:1,fa:95:16:da:1f:ab Lease:0x669f9174}
	I0722 04:53:00.443871    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:c2:21:26:80:b3:ec ID:1,c2:21:26:80:b3:ec Lease:0x669f9136}
	I0722 04:53:00.443880    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:46:5a:9d:73:3c:7b ID:1,46:5a:9d:73:3c:7b Lease:0x669e3d1c}
	I0722 04:53:00.443889    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:aa:fb:72:a3:d5:2a ID:1,aa:fb:72:a3:d5:2a Lease:0x669f8e5d}
	I0722 04:53:00.443904    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:f9:3a:b4:3:19 ID:1,b2:f9:3a:b4:3:19 Lease:0x669f8e35}
	I0722 04:53:00.443933    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d6:39:9d:61:bc:c3 ID:1,d6:39:9d:61:bc:c3 Lease:0x669f8df2}
	I0722 04:53:00.443960    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:f2:a:93:ea:6:d8 ID:1,f2:a:93:ea:6:d8 Lease:0x669e3c67}
	I0722 04:53:00.443975    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:a9:c8:fd:de:59 ID:1,92:a9:c8:fd:de:59 Lease:0x669f8ce7}
	I0722 04:53:00.443986    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ca:7d:32:d9:5d:55 ID:1,ca:7d:32:d9:5d:55 Lease:0x669f8c7f}
	I0722 04:53:00.443998    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:b6:20:31:8d:e0:89 ID:1,b6:20:31:8d:e0:89 Lease:0x669e3a76}
	I0722 04:53:00.444009    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4e:65:fa:f9:26:3 ID:1,4e:65:fa:f9:26:3 Lease:0x669f8c45}
	I0722 04:53:00.444042    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:de:e:68:47:cf:44 ID:1,de:e:68:47:cf:44 Lease:0x669f8c1b}
	I0722 04:53:00.444060    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:5e:d2:75:4c:13:a8 ID:1,5e:d2:75:4c:13:a8 Lease:0x669f882a}
	I0722 04:53:00.444070    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:9c:b9:4c:57:c9 ID:1,42:9c:b9:4c:57:c9 Lease:0x669f8766}
	I0722 04:53:00.444079    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:5f:b2:6a:58:a7 ID:1,f6:5f:b2:6a:58:a7 Lease:0x669f8605}
	I0722 04:53:02.445729    7354 main.go:141] libmachine: (embed-certs-781000) DBG | Attempt 4
	I0722 04:53:02.445743    7354 main.go:141] libmachine: (embed-certs-781000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 04:53:02.445856    7354 main.go:141] libmachine: (embed-certs-781000) DBG | hyperkit pid from json: 7378
	I0722 04:53:02.446649    7354 main.go:141] libmachine: (embed-certs-781000) DBG | Searching for 62:53:74:c7:db:88 in /var/db/dhcpd_leases ...
	I0722 04:53:02.446720    7354 main.go:141] libmachine: (embed-certs-781000) DBG | Found 36 entries in /var/db/dhcpd_leases!
	I0722 04:53:02.446731    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.37 HWAddress:9a:b4:f9:aa:74:a0 ID:1,9a:b4:f9:aa:74:a0 Lease:0x669f9905}
	I0722 04:53:02.446754    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.36 HWAddress:7a:ee:70:ee:14:f5 ID:1,7a:ee:70:ee:14:f5 Lease:0x669f986e}
	I0722 04:53:02.446762    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.35 HWAddress:6e:be:a1:44:5:4b ID:1,6e:be:a1:44:5:4b Lease:0x669f96ec}
	I0722 04:53:02.446778    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.34 HWAddress:42:1e:5:40:b8:f4 ID:1,42:1e:5:40:b8:f4 Lease:0x669f96aa}
	I0722 04:53:02.446785    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.33 HWAddress:96:29:87:f9:fe:99 ID:1,96:29:87:f9:fe:99 Lease:0x669f954f}
	I0722 04:53:02.446792    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.32 HWAddress:56:f7:56:da:5e:c ID:1,56:f7:56:da:5e:c Lease:0x669e4387}
	I0722 04:53:02.446798    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.31 HWAddress:8e:5:f6:92:46:58 ID:1,8e:5:f6:92:46:58 Lease:0x669f94d8}
	I0722 04:53:02.446804    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.30 HWAddress:7e:32:a8:ed:65:b7 ID:1,7e:32:a8:ed:65:b7 Lease:0x669f94bc}
	I0722 04:53:02.446810    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.29 HWAddress:2a:2b:92:9c:a2:5a ID:1,2a:2b:92:9c:a2:5a Lease:0x669f94af}
	I0722 04:53:02.446816    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.28 HWAddress:32:fa:b2:68:99:cc ID:1,32:fa:b2:68:99:cc Lease:0x669e4324}
	I0722 04:53:02.446822    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.27 HWAddress:fa:9b:b5:7:d4:be ID:1,fa:9b:b5:7:d4:be Lease:0x669e4323}
	I0722 04:53:02.446832    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.26 HWAddress:ae:e4:f6:76:b9:c0 ID:1,ae:e4:f6:76:b9:c0 Lease:0x669e42b7}
	I0722 04:53:02.446856    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.25 HWAddress:4e:81:18:7d:a8:7e ID:1,4e:81:18:7d:a8:7e Lease:0x669f93fd}
	I0722 04:53:02.446869    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.24 HWAddress:4a:af:65:82:aa:c5 ID:1,4a:af:65:82:aa:c5 Lease:0x669f93c8}
	I0722 04:53:02.446878    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.23 HWAddress:12:af:ac:fa:ca:4c ID:1,12:af:ac:fa:ca:4c Lease:0x669f93d2}
	I0722 04:53:02.446906    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.22 HWAddress:26:4:fc:35:c8:96 ID:1,26:4:fc:35:c8:96 Lease:0x669f9389}
	I0722 04:53:02.446919    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.21 HWAddress:de:22:65:a4:f5:e ID:1,de:22:65:a4:f5:e Lease:0x669f9319}
	I0722 04:53:02.446928    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.20 HWAddress:b6:c9:f9:a0:af:9a ID:1,b6:c9:f9:a0:af:9a Lease:0x669f92aa}
	I0722 04:53:02.446934    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.19 HWAddress:7a:6:64:ab:c5:b3 ID:1,7a:6:64:ab:c5:b3 Lease:0x669f922f}
	I0722 04:53:02.446941    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.18 HWAddress:4a:ba:63:18:2b:e1 ID:1,4a:ba:63:18:2b:e1 Lease:0x669e403d}
	I0722 04:53:02.446948    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.17 HWAddress:6a:5a:35:14:69:f3 ID:1,6a:5a:35:14:69:f3 Lease:0x669e3f9a}
	I0722 04:53:02.446954    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.16 HWAddress:fa:95:16:da:1f:ab ID:1,fa:95:16:da:1f:ab Lease:0x669f9174}
	I0722 04:53:02.446964    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.15 HWAddress:c2:21:26:80:b3:ec ID:1,c2:21:26:80:b3:ec Lease:0x669f9136}
	I0722 04:53:02.446970    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.14 HWAddress:46:5a:9d:73:3c:7b ID:1,46:5a:9d:73:3c:7b Lease:0x669e3d1c}
	I0722 04:53:02.446983    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.13 HWAddress:aa:fb:72:a3:d5:2a ID:1,aa:fb:72:a3:d5:2a Lease:0x669f8e5d}
	I0722 04:53:02.446990    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.12 HWAddress:b2:f9:3a:b4:3:19 ID:1,b2:f9:3a:b4:3:19 Lease:0x669f8e35}
	I0722 04:53:02.447001    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.11 HWAddress:d6:39:9d:61:bc:c3 ID:1,d6:39:9d:61:bc:c3 Lease:0x669f8df2}
	I0722 04:53:02.447008    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.10 HWAddress:f2:a:93:ea:6:d8 ID:1,f2:a:93:ea:6:d8 Lease:0x669e3c67}
	I0722 04:53:02.447015    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.9 HWAddress:92:a9:c8:fd:de:59 ID:1,92:a9:c8:fd:de:59 Lease:0x669f8ce7}
	I0722 04:53:02.447021    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.8 HWAddress:ca:7d:32:d9:5d:55 ID:1,ca:7d:32:d9:5d:55 Lease:0x669f8c7f}
	I0722 04:53:02.447028    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.7 HWAddress:b6:20:31:8d:e0:89 ID:1,b6:20:31:8d:e0:89 Lease:0x669e3a76}
	I0722 04:53:02.447041    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.6 HWAddress:4e:65:fa:f9:26:3 ID:1,4e:65:fa:f9:26:3 Lease:0x669f8c45}
	I0722 04:53:02.447054    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.5 HWAddress:de:e:68:47:cf:44 ID:1,de:e:68:47:cf:44 Lease:0x669f8c1b}
	I0722 04:53:02.447069    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.4 HWAddress:5e:d2:75:4c:13:a8 ID:1,5e:d2:75:4c:13:a8 Lease:0x669f882a}
	I0722 04:53:02.447078    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.3 HWAddress:42:9c:b9:4c:57:c9 ID:1,42:9c:b9:4c:57:c9 Lease:0x669f8766}
	I0722 04:53:02.447087    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.2 HWAddress:f6:5f:b2:6a:58:a7 ID:1,f6:5f:b2:6a:58:a7 Lease:0x669f8605}
	I0722 04:53:04.448386    7354 main.go:141] libmachine: (embed-certs-781000) DBG | Attempt 5
	I0722 04:53:04.448403    7354 main.go:141] libmachine: (embed-certs-781000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 04:53:04.448519    7354 main.go:141] libmachine: (embed-certs-781000) DBG | hyperkit pid from json: 7378
	I0722 04:53:04.449272    7354 main.go:141] libmachine: (embed-certs-781000) DBG | Searching for 62:53:74:c7:db:88 in /var/db/dhcpd_leases ...
	I0722 04:53:04.449343    7354 main.go:141] libmachine: (embed-certs-781000) DBG | Found 37 entries in /var/db/dhcpd_leases!
	I0722 04:53:04.449356    7354 main.go:141] libmachine: (embed-certs-781000) DBG | dhcp entry: {Name:minikube IPAddress:192.169.0.38 HWAddress:62:53:74:c7:db:88 ID:1,62:53:74:c7:db:88 Lease:0x669f999f}
	I0722 04:53:04.449367    7354 main.go:141] libmachine: (embed-certs-781000) DBG | Found match: 62:53:74:c7:db:88
	I0722 04:53:04.449382    7354 main.go:141] libmachine: (embed-certs-781000) DBG | IP: 192.169.0.38
	I0722 04:53:04.449430    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetConfigRaw
	I0722 04:53:04.450059    7354 main.go:141] libmachine: (embed-certs-781000) Calling .DriverName
	I0722 04:53:04.450196    7354 main.go:141] libmachine: (embed-certs-781000) Calling .DriverName
	I0722 04:53:04.450326    7354 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0722 04:53:04.450338    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetState
	I0722 04:53:04.450437    7354 main.go:141] libmachine: (embed-certs-781000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 04:53:04.450503    7354 main.go:141] libmachine: (embed-certs-781000) DBG | hyperkit pid from json: 7378
	I0722 04:53:04.451298    7354 main.go:141] libmachine: Detecting operating system of created instance...
	I0722 04:53:04.451322    7354 main.go:141] libmachine: Waiting for SSH to be available...
	I0722 04:53:04.451331    7354 main.go:141] libmachine: Getting to WaitForSSH function...
	I0722 04:53:04.451336    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHHostname
	I0722 04:53:04.451431    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHPort
	I0722 04:53:04.451531    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHKeyPath
	I0722 04:53:04.451642    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHKeyPath
	I0722 04:53:04.451738    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHUsername
	I0722 04:53:04.452203    7354 main.go:141] libmachine: Using SSH client type: native
	I0722 04:53:04.452400    7354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf05e0c0] 0xf060e20 <nil>  [] 0s} 192.169.0.38 22 <nil> <nil>}
	I0722 04:53:04.452407    7354 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0722 04:53:05.508195    7354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 04:53:05.508215    7354 main.go:141] libmachine: Detecting the provisioner...
	I0722 04:53:05.508222    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHHostname
	I0722 04:53:05.508361    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHPort
	I0722 04:53:05.508464    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHKeyPath
	I0722 04:53:05.508569    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHKeyPath
	I0722 04:53:05.508653    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHUsername
	I0722 04:53:05.508787    7354 main.go:141] libmachine: Using SSH client type: native
	I0722 04:53:05.508932    7354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf05e0c0] 0xf060e20 <nil>  [] 0s} 192.169.0.38 22 <nil> <nil>}
	I0722 04:53:05.508940    7354 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0722 04:53:05.562241    7354 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0722 04:53:05.562304    7354 main.go:141] libmachine: found compatible host: buildroot
	I0722 04:53:05.562318    7354 main.go:141] libmachine: Provisioning with buildroot...
	I0722 04:53:05.562330    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetMachineName
	I0722 04:53:05.562462    7354 buildroot.go:166] provisioning hostname "embed-certs-781000"
	I0722 04:53:05.562473    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetMachineName
	I0722 04:53:05.562557    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHHostname
	I0722 04:53:05.562650    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHPort
	I0722 04:53:05.562734    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHKeyPath
	I0722 04:53:05.562823    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHKeyPath
	I0722 04:53:05.562905    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHUsername
	I0722 04:53:05.563023    7354 main.go:141] libmachine: Using SSH client type: native
	I0722 04:53:05.563169    7354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf05e0c0] 0xf060e20 <nil>  [] 0s} 192.169.0.38 22 <nil> <nil>}
	I0722 04:53:05.563178    7354 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-781000 && echo "embed-certs-781000" | sudo tee /etc/hostname
	I0722 04:53:05.626198    7354 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-781000
	
	I0722 04:53:05.626215    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHHostname
	I0722 04:53:05.626350    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHPort
	I0722 04:53:05.626444    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHKeyPath
	I0722 04:53:05.626535    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHKeyPath
	I0722 04:53:05.626623    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHUsername
	I0722 04:53:05.626753    7354 main.go:141] libmachine: Using SSH client type: native
	I0722 04:53:05.626902    7354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf05e0c0] 0xf060e20 <nil>  [] 0s} 192.169.0.38 22 <nil> <nil>}
	I0722 04:53:05.626914    7354 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-781000' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-781000/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-781000' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0722 04:53:05.685962    7354 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0722 04:53:05.685986    7354 buildroot.go:172] set auth options {CertDir:/Users/jenkins/minikube-integration/19313-1111/.minikube CaCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem CaPrivateKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem ServerKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem ClientKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/Users/jenkins/minikube-integration/19313-1111/.minikube}
	I0722 04:53:05.686002    7354 buildroot.go:174] setting up certificates
	I0722 04:53:05.686011    7354 provision.go:84] configureAuth start
	I0722 04:53:05.686018    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetMachineName
	I0722 04:53:05.686154    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetIP
	I0722 04:53:05.686251    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHHostname
	I0722 04:53:05.686355    7354 provision.go:143] copyHostCerts
	I0722 04:53:05.686451    7354 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem, removing ...
	I0722 04:53:05.686463    7354 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem
	I0722 04:53:05.686638    7354 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/ca.pem (1078 bytes)
	I0722 04:53:05.686896    7354 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem, removing ...
	I0722 04:53:05.686903    7354 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem
	I0722 04:53:05.686996    7354 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/cert.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/cert.pem (1123 bytes)
	I0722 04:53:05.687198    7354 exec_runner.go:144] found /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem, removing ...
	I0722 04:53:05.687204    7354 exec_runner.go:203] rm: /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem
	I0722 04:53:05.687288    7354 exec_runner.go:151] cp: /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/key.pem --> /Users/jenkins/minikube-integration/19313-1111/.minikube/key.pem (1675 bytes)
	I0722 04:53:05.687453    7354 provision.go:117] generating server cert: /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem ca-key=/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem private-key=/Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca-key.pem org=jenkins.embed-certs-781000 san=[127.0.0.1 192.169.0.38 embed-certs-781000 localhost minikube]
	I0722 04:53:05.746057    7354 provision.go:177] copyRemoteCerts
	I0722 04:53:05.746119    7354 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0722 04:53:05.746134    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHHostname
	I0722 04:53:05.746247    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHPort
	I0722 04:53:05.746344    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHKeyPath
	I0722 04:53:05.746437    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHUsername
	I0722 04:53:05.746520    7354 sshutil.go:53] new ssh client: &{IP:192.169.0.38 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000/id_rsa Username:docker}
	I0722 04:53:05.780848    7354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0722 04:53:05.800244    7354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0722 04:53:05.819501    7354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1078 bytes)
	I0722 04:53:05.838942    7354 provision.go:87] duration metric: took 152.919322ms to configureAuth
	I0722 04:53:05.838953    7354 buildroot.go:189] setting minikube options for container-runtime
	I0722 04:53:05.839077    7354 config.go:182] Loaded profile config "embed-certs-781000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:53:05.839090    7354 main.go:141] libmachine: (embed-certs-781000) Calling .DriverName
	I0722 04:53:05.839218    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHHostname
	I0722 04:53:05.839339    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHPort
	I0722 04:53:05.839442    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHKeyPath
	I0722 04:53:05.839535    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHKeyPath
	I0722 04:53:05.839614    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHUsername
	I0722 04:53:05.839723    7354 main.go:141] libmachine: Using SSH client type: native
	I0722 04:53:05.839847    7354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf05e0c0] 0xf060e20 <nil>  [] 0s} 192.169.0.38 22 <nil> <nil>}
	I0722 04:53:05.839855    7354 main.go:141] libmachine: About to run SSH command:
	df --output=fstype / | tail -n 1
	I0722 04:53:05.894285    7354 main.go:141] libmachine: SSH cmd err, output: <nil>: tmpfs
	
	I0722 04:53:05.894297    7354 buildroot.go:70] root file system type: tmpfs
	I0722 04:53:05.894381    7354 provision.go:314] Updating docker unit: /lib/systemd/system/docker.service ...
	I0722 04:53:05.894393    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHHostname
	I0722 04:53:05.894545    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHPort
	I0722 04:53:05.894648    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHKeyPath
	I0722 04:53:05.894729    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHKeyPath
	I0722 04:53:05.894828    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHUsername
	I0722 04:53:05.894956    7354 main.go:141] libmachine: Using SSH client type: native
	I0722 04:53:05.895093    7354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf05e0c0] 0xf060e20 <nil>  [] 0s} 192.169.0.38 22 <nil> <nil>}
	I0722 04:53:05.895134    7354 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /lib/systemd/system && printf %s "[Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP \$MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	" | sudo tee /lib/systemd/system/docker.service.new
	I0722 04:53:05.958766    7354 main.go:141] libmachine: SSH cmd err, output: <nil>: [Unit]
	Description=Docker Application Container Engine
	Documentation=https://docs.docker.com
	After=network.target  minikube-automount.service docker.socket
	Requires= minikube-automount.service docker.socket 
	StartLimitBurst=3
	StartLimitIntervalSec=60
	
	[Service]
	Type=notify
	Restart=on-failure
	
	
	
	# This file is a systemd drop-in unit that inherits from the base dockerd configuration.
	# The base configuration already specifies an 'ExecStart=...' command. The first directive
	# here is to clear out that command inherited from the base configuration. Without this,
	# the command from the base configuration and the command specified here are treated as
	# a sequence of commands, which is not the desired behavior, nor is it valid -- systemd
	# will catch this invalid input and refuse to start the service with an error like:
	#  Service has more than one ExecStart= setting, which is only allowed for Type=oneshot services.
	
	# NOTE: default-ulimit=nofile is set to an arbitrary number for consistency with other
	# container runtimes. If left unlimited, it may result in OOM issues with MySQL.
	ExecStart=
	ExecStart=/usr/bin/dockerd -H tcp://0.0.0.0:2376 -H unix:///var/run/docker.sock --default-ulimit=nofile=1048576:1048576 --tlsverify --tlscacert /etc/docker/ca.pem --tlscert /etc/docker/server.pem --tlskey /etc/docker/server-key.pem --label provider=hyperkit --insecure-registry 10.96.0.0/12 
	ExecReload=/bin/kill -s HUP $MAINPID
	
	# Having non-zero Limit*s causes performance problems due to accounting overhead
	# in the kernel. We recommend using cgroups to do container-local accounting.
	LimitNOFILE=infinity
	LimitNPROC=infinity
	LimitCORE=infinity
	
	# Uncomment TasksMax if your systemd version supports it.
	# Only systemd 226 and above support this version.
	TasksMax=infinity
	TimeoutStartSec=0
	
	# set delegate yes so that systemd does not reset the cgroups of docker containers
	Delegate=yes
	
	# kill only the docker process, not all processes in the cgroup
	KillMode=process
	
	[Install]
	WantedBy=multi-user.target
	
	I0722 04:53:05.958792    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHHostname
	I0722 04:53:05.958947    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHPort
	I0722 04:53:05.959053    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHKeyPath
	I0722 04:53:05.959171    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHKeyPath
	I0722 04:53:05.959269    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHUsername
	I0722 04:53:05.959412    7354 main.go:141] libmachine: Using SSH client type: native
	I0722 04:53:05.959559    7354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf05e0c0] 0xf060e20 <nil>  [] 0s} 192.169.0.38 22 <nil> <nil>}
	I0722 04:53:05.959572    7354 main.go:141] libmachine: About to run SSH command:
	sudo diff -u /lib/systemd/system/docker.service /lib/systemd/system/docker.service.new || { sudo mv /lib/systemd/system/docker.service.new /lib/systemd/system/docker.service; sudo systemctl -f daemon-reload && sudo systemctl -f enable docker && sudo systemctl -f restart docker; }
	I0722 04:53:07.507409    7354 main.go:141] libmachine: SSH cmd err, output: <nil>: diff: can't stat '/lib/systemd/system/docker.service': No such file or directory
	Created symlink /etc/systemd/system/multi-user.target.wants/docker.service → /usr/lib/systemd/system/docker.service.
	
	I0722 04:53:07.507424    7354 main.go:141] libmachine: Checking connection to Docker...
	I0722 04:53:07.507430    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetURL
	I0722 04:53:07.507565    7354 main.go:141] libmachine: Docker is up and running!
	I0722 04:53:07.507573    7354 main.go:141] libmachine: Reticulating splines...
	I0722 04:53:07.507578    7354 client.go:171] duration metric: took 13.817791852s to LocalClient.Create
	I0722 04:53:07.507589    7354 start.go:167] duration metric: took 13.817836773s to libmachine.API.Create "embed-certs-781000"
	I0722 04:53:07.507600    7354 start.go:293] postStartSetup for "embed-certs-781000" (driver="hyperkit")
	I0722 04:53:07.507608    7354 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0722 04:53:07.507618    7354 main.go:141] libmachine: (embed-certs-781000) Calling .DriverName
	I0722 04:53:07.507764    7354 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0722 04:53:07.507776    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHHostname
	I0722 04:53:07.507876    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHPort
	I0722 04:53:07.507959    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHKeyPath
	I0722 04:53:07.508057    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHUsername
	I0722 04:53:07.508148    7354 sshutil.go:53] new ssh client: &{IP:192.169.0.38 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000/id_rsa Username:docker}
	I0722 04:53:07.540501    7354 ssh_runner.go:195] Run: cat /etc/os-release
	I0722 04:53:07.543721    7354 info.go:137] Remote host: Buildroot 2023.02.9
	I0722 04:53:07.543734    7354 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1111/.minikube/addons for local assets ...
	I0722 04:53:07.543856    7354 filesync.go:126] Scanning /Users/jenkins/minikube-integration/19313-1111/.minikube/files for local assets ...
	I0722 04:53:07.544046    7354 filesync.go:149] local asset: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem -> 16372.pem in /etc/ssl/certs
	I0722 04:53:07.544263    7354 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0722 04:53:07.551300    7354 ssh_runner.go:362] scp /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/ssl/certs/16372.pem --> /etc/ssl/certs/16372.pem (1708 bytes)
	I0722 04:53:07.570835    7354 start.go:296] duration metric: took 63.227905ms for postStartSetup
	I0722 04:53:07.570876    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetConfigRaw
	I0722 04:53:07.571475    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetIP
	I0722 04:53:07.571637    7354 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/embed-certs-781000/config.json ...
	I0722 04:53:07.571979    7354 start.go:128] duration metric: took 13.95283332s to createHost
	I0722 04:53:07.572001    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHHostname
	I0722 04:53:07.572093    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHPort
	I0722 04:53:07.572180    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHKeyPath
	I0722 04:53:07.572266    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHKeyPath
	I0722 04:53:07.572348    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHUsername
	I0722 04:53:07.572463    7354 main.go:141] libmachine: Using SSH client type: native
	I0722 04:53:07.572621    7354 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0xf05e0c0] 0xf060e20 <nil>  [] 0s} 192.169.0.38 22 <nil> <nil>}
	I0722 04:53:07.572629    7354 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0722 04:53:07.626064    7354 main.go:141] libmachine: SSH cmd err, output: <nil>: 1721649187.749500700
	
	I0722 04:53:07.626075    7354 fix.go:216] guest clock: 1721649187.749500700
	I0722 04:53:07.626081    7354 fix.go:229] Guest: 2024-07-22 04:53:07.7495007 -0700 PDT Remote: 2024-07-22 04:53:07.571988 -0700 PDT m=+14.569572486 (delta=177.5127ms)
	I0722 04:53:07.626098    7354 fix.go:200] guest clock delta is within tolerance: 177.5127ms
	I0722 04:53:07.626103    7354 start.go:83] releasing machines lock for "embed-certs-781000", held for 14.007030113s
	I0722 04:53:07.626122    7354 main.go:141] libmachine: (embed-certs-781000) Calling .DriverName
	I0722 04:53:07.626274    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetIP
	I0722 04:53:07.626374    7354 main.go:141] libmachine: (embed-certs-781000) Calling .DriverName
	I0722 04:53:07.626673    7354 main.go:141] libmachine: (embed-certs-781000) Calling .DriverName
	I0722 04:53:07.626797    7354 main.go:141] libmachine: (embed-certs-781000) Calling .DriverName
	I0722 04:53:07.626877    7354 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0722 04:53:07.626913    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHHostname
	I0722 04:53:07.626940    7354 ssh_runner.go:195] Run: cat /version.json
	I0722 04:53:07.626958    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHHostname
	I0722 04:53:07.627028    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHPort
	I0722 04:53:07.627067    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHPort
	I0722 04:53:07.627146    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHKeyPath
	I0722 04:53:07.627175    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHKeyPath
	I0722 04:53:07.627259    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHUsername
	I0722 04:53:07.627293    7354 main.go:141] libmachine: (embed-certs-781000) Calling .GetSSHUsername
	I0722 04:53:07.627345    7354 sshutil.go:53] new ssh client: &{IP:192.169.0.38 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000/id_rsa Username:docker}
	I0722 04:53:07.627379    7354 sshutil.go:53] new ssh client: &{IP:192.169.0.38 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/embed-certs-781000/id_rsa Username:docker}
	I0722 04:53:07.705188    7354 ssh_runner.go:195] Run: systemctl --version
	I0722 04:53:07.709841    7354 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0722 04:53:07.714365    7354 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0722 04:53:07.714430    7354 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0722 04:53:07.727254    7354 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0722 04:53:07.727270    7354 start.go:495] detecting cgroup driver to use...
	I0722 04:53:07.727381    7354 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///run/containerd/containerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 04:53:07.743170    7354 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)sandbox_image = .*$|\1sandbox_image = "registry.k8s.io/pause:3.9"|' /etc/containerd/config.toml"
	I0722 04:53:07.752278    7354 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)restrict_oom_score_adj = .*$|\1restrict_oom_score_adj = false|' /etc/containerd/config.toml"
	I0722 04:53:07.761143    7354 containerd.go:146] configuring containerd to use "cgroupfs" as cgroup driver...
	I0722 04:53:07.761204    7354 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)SystemdCgroup = .*$|\1SystemdCgroup = false|g' /etc/containerd/config.toml"
	I0722 04:53:07.770026    7354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runtime.v1.linux"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 04:53:07.779164    7354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/systemd_cgroup/d' /etc/containerd/config.toml"
	I0722 04:53:07.788290    7354 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|"io.containerd.runc.v1"|"io.containerd.runc.v2"|g' /etc/containerd/config.toml"
	I0722 04:53:07.796983    7354 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0722 04:53:07.806164    7354 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)conf_dir = .*$|\1conf_dir = "/etc/cni/net.d"|g' /etc/containerd/config.toml"
	I0722 04:53:07.815122    7354 ssh_runner.go:195] Run: sh -c "sudo sed -i '/^ *enable_unprivileged_ports = .*/d' /etc/containerd/config.toml"
	I0722 04:53:07.824216    7354 ssh_runner.go:195] Run: sh -c "sudo sed -i -r 's|^( *)\[plugins."io.containerd.grpc.v1.cri"\]|&\n\1  enable_unprivileged_ports = true|' /etc/containerd/config.toml"
	I0722 04:53:07.833136    7354 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0722 04:53:07.841138    7354 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0722 04:53:07.849013    7354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:53:07.946502    7354 ssh_runner.go:195] Run: sudo systemctl restart containerd
	I0722 04:53:07.966644    7354 start.go:495] detecting cgroup driver to use...
	I0722 04:53:07.966724    7354 ssh_runner.go:195] Run: sudo systemctl cat docker.service
	I0722 04:53:07.988658    7354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 04:53:08.001429    7354 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0722 04:53:08.022195    7354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0722 04:53:08.034010    7354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 04:53:08.045617    7354 ssh_runner.go:195] Run: sudo systemctl stop -f crio
	I0722 04:53:08.080669    7354 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service crio
	I0722 04:53:08.091765    7354 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/cri-dockerd.sock
	" | sudo tee /etc/crictl.yaml"
	I0722 04:53:08.107648    7354 ssh_runner.go:195] Run: which cri-dockerd
	I0722 04:53:08.110813    7354 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/cri-docker.service.d
	I0722 04:53:08.118443    7354 ssh_runner.go:362] scp memory --> /etc/systemd/system/cri-docker.service.d/10-cni.conf (189 bytes)
	I0722 04:53:08.132590    7354 ssh_runner.go:195] Run: sudo systemctl unmask docker.service
	I0722 04:53:08.232442    7354 ssh_runner.go:195] Run: sudo systemctl enable docker.socket
	I0722 04:53:08.363524    7354 docker.go:574] configuring docker to use "cgroupfs" as cgroup driver...
	I0722 04:53:08.363627    7354 ssh_runner.go:362] scp memory --> /etc/docker/daemon.json (130 bytes)
	I0722 04:53:08.382737    7354 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0722 04:53:08.495198    7354 ssh_runner.go:195] Run: sudo systemctl restart docker
	I0722 04:54:09.516752    7354 ssh_runner.go:235] Completed: sudo systemctl restart docker: (1m1.022015291s)
	I0722 04:54:09.516837    7354 ssh_runner.go:195] Run: sudo journalctl --no-pager -u docker
	I0722 04:54:09.552145    7354 out.go:177] 
	W0722 04:54:09.572753    7354 out.go:239] X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 22 11:53:06 embed-certs-781000 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 11:53:06 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:06.391089842Z" level=info msg="Starting up"
	Jul 22 11:53:06 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:06.391540258Z" level=info msg="containerd not running, starting managed containerd"
	Jul 22 11:53:06 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:06.392143630Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=530
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.409668428Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.424253520Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.424314306Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.424373237Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.424407740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.424482015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.424518631Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.424657051Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.424700065Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.424731234Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.424759543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.424839815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.425063497Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.426602040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.426655334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.426787538Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.426879833Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.426976223Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.427047006Z" level=info msg="metadata content store policy set" policy=shared
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430099877Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430159092Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430195112Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430227921Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430259274Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430346806Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430495622Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430590118Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430635023Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430669927Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430706630Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430740588Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430769791Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430799778Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430829684Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430859495Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430892100Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430932743Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430970148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431001612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431086209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431121284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431158710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431198728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431240417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431273524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431303151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431333340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431364080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431392572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431423931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431454915Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431489306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431520550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431555514Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431606998Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431641160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431670427Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431698773Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431726226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431758556Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431789342Z" level=info msg="NRI interface is disabled by configuration."
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431966908Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.432028410Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.432080155Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.432121150Z" level=info msg="containerd successfully booted in 0.023135s"
	Jul 22 11:53:07 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:07.416816622Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 22 11:53:07 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:07.420701268Z" level=info msg="Loading containers: start."
	Jul 22 11:53:07 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:07.509000407Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 22 11:53:07 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:07.592749703Z" level=info msg="Loading containers: done."
	Jul 22 11:53:07 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:07.602986262Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 22 11:53:07 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:07.603072824Z" level=info msg="Daemon has completed initialization"
	Jul 22 11:53:07 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:07.629639796Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 22 11:53:07 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:07.629812064Z" level=info msg="API listen on [::]:2376"
	Jul 22 11:53:07 embed-certs-781000 systemd[1]: Started Docker Application Container Engine.
	Jul 22 11:53:08 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:08.630945900Z" level=info msg="Processing signal 'terminated'"
	Jul 22 11:53:08 embed-certs-781000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 22 11:53:08 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:08.632266260Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 22 11:53:08 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:08.632433621Z" level=info msg="Daemon shutdown complete"
	Jul 22 11:53:08 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:08.632542416Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 22 11:53:08 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:08.632612947Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 22 11:53:09 embed-certs-781000 systemd[1]: docker.service: Deactivated successfully.
	Jul 22 11:53:09 embed-certs-781000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 22 11:53:09 embed-certs-781000 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 11:53:09 embed-certs-781000 dockerd[868]: time="2024-07-22T11:53:09.666565882Z" level=info msg="Starting up"
	Jul 22 11:54:09 embed-certs-781000 dockerd[868]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 22 11:54:09 embed-certs-781000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 22 11:54:09 embed-certs-781000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 22 11:54:09 embed-certs-781000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	X Exiting due to RUNTIME_ENABLE: Failed to enable container runtime: sudo systemctl restart docker: Process exited with status 1
	stdout:
	
	stderr:
	Job for docker.service failed because the control process exited with error code.
	See "systemctl status docker.service" and "journalctl -xeu docker.service" for details.
	
	sudo journalctl --no-pager -u docker:
	-- stdout --
	Jul 22 11:53:06 embed-certs-781000 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 11:53:06 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:06.391089842Z" level=info msg="Starting up"
	Jul 22 11:53:06 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:06.391540258Z" level=info msg="containerd not running, starting managed containerd"
	Jul 22 11:53:06 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:06.392143630Z" level=info msg="started new containerd process" address=/var/run/docker/containerd/containerd.sock module=libcontainerd pid=530
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.409668428Z" level=info msg="starting containerd" revision=8fc6bcff51318944179630522a095cc9dbf9f353 version=v1.7.20
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.424253520Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.424314306Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.424373237Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.424407740Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.424482015Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.424518631Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.424657051Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.424700065Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.424731234Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.424759543Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.424839815Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.425063497Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.426602040Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/5.10.207\\n\"): skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.426655334Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.426787538Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.426879833Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.426976223Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.427047006Z" level=info msg="metadata content store policy set" policy=shared
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430099877Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430159092Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430195112Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430227921Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430259274Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." type=io.containerd.runtime.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430346806Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430495622Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430590118Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430635023Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430669927Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430706630Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430740588Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430769791Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430799778Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430829684Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430859495Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430892100Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430932743Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.430970148Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431001612Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431086209Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431121284Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431158710Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431198728Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431240417Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431273524Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431303151Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431333340Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431364080Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431392572Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431423931Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431454915Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431489306Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431520550Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431555514Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431606998Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431641160Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431670427Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431698773Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431726226Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431758556Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431789342Z" level=info msg="NRI interface is disabled by configuration."
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.431966908Z" level=info msg=serving... address=/var/run/docker/containerd/containerd-debug.sock
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.432028410Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock.ttrpc
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.432080155Z" level=info msg=serving... address=/var/run/docker/containerd/containerd.sock
	Jul 22 11:53:06 embed-certs-781000 dockerd[530]: time="2024-07-22T11:53:06.432121150Z" level=info msg="containerd successfully booted in 0.023135s"
	Jul 22 11:53:07 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:07.416816622Z" level=info msg="[graphdriver] trying configured driver: overlay2"
	Jul 22 11:53:07 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:07.420701268Z" level=info msg="Loading containers: start."
	Jul 22 11:53:07 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:07.509000407Z" level=warning msg="ip6tables is enabled, but cannot set up ip6tables chains" error="failed to create NAT chain DOCKER: iptables failed: ip6tables --wait -t nat -N DOCKER: ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)\nPerhaps ip6tables or your kernel needs to be upgraded.\n (exit status 3)"
	Jul 22 11:53:07 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:07.592749703Z" level=info msg="Loading containers: done."
	Jul 22 11:53:07 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:07.602986262Z" level=info msg="Docker daemon" commit=662f78c containerd-snapshotter=false storage-driver=overlay2 version=27.0.3
	Jul 22 11:53:07 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:07.603072824Z" level=info msg="Daemon has completed initialization"
	Jul 22 11:53:07 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:07.629639796Z" level=info msg="API listen on /var/run/docker.sock"
	Jul 22 11:53:07 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:07.629812064Z" level=info msg="API listen on [::]:2376"
	Jul 22 11:53:07 embed-certs-781000 systemd[1]: Started Docker Application Container Engine.
	Jul 22 11:53:08 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:08.630945900Z" level=info msg="Processing signal 'terminated'"
	Jul 22 11:53:08 embed-certs-781000 systemd[1]: Stopping Docker Application Container Engine...
	Jul 22 11:53:08 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:08.632266260Z" level=info msg="stopping event stream following graceful shutdown" error="<nil>" module=libcontainerd namespace=moby
	Jul 22 11:53:08 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:08.632433621Z" level=info msg="Daemon shutdown complete"
	Jul 22 11:53:08 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:08.632542416Z" level=info msg="stopping event stream following graceful shutdown" error="context canceled" module=libcontainerd namespace=plugins.moby
	Jul 22 11:53:08 embed-certs-781000 dockerd[524]: time="2024-07-22T11:53:08.632612947Z" level=info msg="stopping healthcheck following graceful shutdown" module=libcontainerd
	Jul 22 11:53:09 embed-certs-781000 systemd[1]: docker.service: Deactivated successfully.
	Jul 22 11:53:09 embed-certs-781000 systemd[1]: Stopped Docker Application Container Engine.
	Jul 22 11:53:09 embed-certs-781000 systemd[1]: Starting Docker Application Container Engine...
	Jul 22 11:53:09 embed-certs-781000 dockerd[868]: time="2024-07-22T11:53:09.666565882Z" level=info msg="Starting up"
	Jul 22 11:54:09 embed-certs-781000 dockerd[868]: failed to start daemon: failed to dial "/run/containerd/containerd.sock": failed to dial "/run/containerd/containerd.sock": context deadline exceeded
	Jul 22 11:54:09 embed-certs-781000 systemd[1]: docker.service: Main process exited, code=exited, status=1/FAILURE
	Jul 22 11:54:09 embed-certs-781000 systemd[1]: docker.service: Failed with result 'exit-code'.
	Jul 22 11:54:09 embed-certs-781000 systemd[1]: Failed to start Docker Application Container Engine.
	
	-- /stdout --
	W0722 04:54:09.572886    7354 out.go:239] * 
	* 
	W0722 04:54:09.573954    7354 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0722 04:54:09.632022    7354 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-darwin-amd64 start -p embed-certs-781000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.3": exit status 90
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-781000 -n embed-certs-781000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-781000 -n embed-certs-781000: exit status 6 (148.982558ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 04:54:09.890842    7419 status.go:417] kubeconfig endpoint: get endpoint: "embed-certs-781000" does not appear in /Users/jenkins/minikube-integration/19313-1111/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "embed-certs-781000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/embed-certs/serial/FirstStart (76.92s)
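
The FirstStart failure above follows the same RUNTIME_ENABLE pattern as the other failures in this run: dockerd is restarted and the new instance (pid 868) times out dialing /run/containerd/containerd.sock at 11:54:09, so systemd marks docker.service failed. A minimal manual triage sketch, assuming the embed-certs-781000 VM is still up and reachable; the profile name is taken from the test and none of the commands below were run by the harness:

    minikube ssh -p embed-certs-781000                    # drop into the guest
    sudo systemctl status docker --no-pager               # unit state as systemd sees it
    sudo journalctl -u docker --no-pager | tail -n 50     # same stream quoted above
    ls -l /run/containerd/containerd.sock                 # the socket dockerd timed out dialing

The `minikube logs --file=logs.txt` command suggested in the boxed advice collects the equivalent information from the host side.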

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-781000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context embed-certs-781000 create -f testdata/busybox.yaml: exit status 1 (37.446652ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-781000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context embed-certs-781000 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-781000 -n embed-certs-781000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-781000 -n embed-certs-781000: exit status 6 (144.436121ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 04:54:10.073377    7425 status.go:417] kubeconfig endpoint: get endpoint: "embed-certs-781000" does not appear in /Users/jenkins/minikube-integration/19313-1111/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "embed-certs-781000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-781000 -n embed-certs-781000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-781000 -n embed-certs-781000: exit status 6 (148.695588ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 04:54:10.222233    7430 status.go:417] kubeconfig endpoint: get endpoint: "embed-certs-781000" does not appear in /Users/jenkins/minikube-integration/19313-1111/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "embed-certs-781000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/embed-certs/serial/DeployApp (0.33s)
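
As the post-mortem output shows, DeployApp fails purely as a knock-on effect of FirstStart: the embed-certs-781000 entry was never written to the kubeconfig, so every `kubectl --context embed-certs-781000` call exits with "does not exist". A sketch of the host-side checks one might run by hand (again, not executed by the harness):

    kubectl config get-contexts                  # embed-certs-781000 should be missing here
    kubectl config current-context               # what kubectl is actually pointing at
    # minikube's own suggestion from the status output; only meaningful once the
    # profile's apiserver is reachable again
    out/minikube-darwin-amd64 update-context -p embed-certs-781000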

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (59.95s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-781000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-781000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 11 (59.76996296s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: docker: docker ps --filter status=paused --filter=name=k8s_.*_(kube-system)_ --format=<no value>: Process exited with status 1
	stdout:
	
	stderr:
	Cannot connect to the Docker daemon at unix:///var/run/docker.sock. Is the docker daemon running?
	
	* 
	╭───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                           │
	│    * If the above advice does not help, please let us know:                                                               │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                             │
	│                                                                                                                           │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                  │
	│    * Please also attach the following file to the GitHub issue:                                                           │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_0.log    │
	│                                                                                                                           │
	╰───────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-darwin-amd64 addons enable metrics-server -p embed-certs-781000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 11
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-781000 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context embed-certs-781000 describe deploy/metrics-server -n kube-system: exit status 1 (37.806706ms)

                                                
                                                
** stderr ** 
	error: context "embed-certs-781000" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context embed-certs-781000 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-781000 -n embed-certs-781000
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-781000 -n embed-certs-781000: exit status 6 (142.722059ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0722 04:55:10.173518    7649 status.go:417] kubeconfig endpoint: get endpoint: "embed-certs-781000" does not appear in /Users/jenkins/minikube-integration/19313-1111/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "embed-certs-781000" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (59.95s)
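
EnableAddonWhileActive spends its 59.77s retrying the paused-container check before giving up with MK_ADDON_ENABLE_PAUSED; the underlying cause is still that the Docker daemon in the VM never came back. A hedged sketch of reproducing the check by hand, run inside the guest after `minikube ssh -p embed-certs-781000` (profile name assumed from the test; the unfilled `--format` template from the error text is dropped):

    sudo systemctl is-active docker              # "active" on a healthy node; here it reports failed
    docker ps --filter status=paused             # the listing the addon check relies on; with the
                                                 # daemon down it fails before any filtering happens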

                                                
                                    

Test pass (313/343)

Order   Passed test   Duration (s)
3 TestDownloadOnly/v1.20.0/json-events 23.93
4 TestDownloadOnly/v1.20.0/preload-exists 0
7 TestDownloadOnly/v1.20.0/kubectl 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.29
9 TestDownloadOnly/v1.20.0/DeleteAll 0.23
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.21
12 TestDownloadOnly/v1.30.3/json-events 10.13
13 TestDownloadOnly/v1.30.3/preload-exists 0
16 TestDownloadOnly/v1.30.3/kubectl 0
17 TestDownloadOnly/v1.30.3/LogsDuration 0.31
18 TestDownloadOnly/v1.30.3/DeleteAll 0.23
19 TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds 0.21
21 TestDownloadOnly/v1.31.0-beta.0/json-events 16.23
22 TestDownloadOnly/v1.31.0-beta.0/preload-exists 0
25 TestDownloadOnly/v1.31.0-beta.0/kubectl 0
26 TestDownloadOnly/v1.31.0-beta.0/LogsDuration 0.29
27 TestDownloadOnly/v1.31.0-beta.0/DeleteAll 0.23
28 TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds 0.21
30 TestBinaryMirror 0.93
31 TestOffline 61.34
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.21
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.19
36 TestAddons/Setup 211.75
38 TestAddons/parallel/Registry 14.53
39 TestAddons/parallel/Ingress 19.15
40 TestAddons/parallel/InspektorGadget 10.54
41 TestAddons/parallel/MetricsServer 5.47
42 TestAddons/parallel/HelmTiller 10.16
44 TestAddons/parallel/CSI 50.57
45 TestAddons/parallel/Headlamp 12.92
46 TestAddons/parallel/CloudSpanner 5.36
47 TestAddons/parallel/LocalPath 58.43
48 TestAddons/parallel/NvidiaDevicePlugin 5.35
49 TestAddons/parallel/Yakd 5.01
50 TestAddons/parallel/Volcano 39.21
53 TestAddons/serial/GCPAuth/Namespaces 0.09
54 TestAddons/StoppedEnableDisable 5.93
55 TestCertOptions 42.77
57 TestDockerFlags 55.54
58 TestForceSystemdFlag 41.56
59 TestForceSystemdEnv 44.57
62 TestHyperKitDriverInstallOrUpdate 8.89
65 TestErrorSpam/setup 35.46
66 TestErrorSpam/start 1.6
67 TestErrorSpam/status 0.49
68 TestErrorSpam/pause 1.29
69 TestErrorSpam/unpause 1.36
70 TestErrorSpam/stop 155.83
73 TestFunctional/serial/CopySyncFile 0
74 TestFunctional/serial/StartWithProxy 55.49
75 TestFunctional/serial/AuditLog 0
76 TestFunctional/serial/SoftStart 39.97
77 TestFunctional/serial/KubeContext 0.04
78 TestFunctional/serial/KubectlGetPods 0.07
81 TestFunctional/serial/CacheCmd/cache/add_remote 3.13
82 TestFunctional/serial/CacheCmd/cache/add_local 1.36
83 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.08
84 TestFunctional/serial/CacheCmd/cache/list 0.08
85 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.17
86 TestFunctional/serial/CacheCmd/cache/cache_reload 1.08
87 TestFunctional/serial/CacheCmd/cache/delete 0.16
88 TestFunctional/serial/MinikubeKubectlCmd 1.18
89 TestFunctional/serial/MinikubeKubectlCmdDirectly 1.49
90 TestFunctional/serial/ExtraConfig 59.97
91 TestFunctional/serial/ComponentHealth 0.05
92 TestFunctional/serial/LogsCmd 2.71
93 TestFunctional/serial/LogsFileCmd 2.72
94 TestFunctional/serial/InvalidService 4.6
96 TestFunctional/parallel/ConfigCmd 0.5
97 TestFunctional/parallel/DashboardCmd 11.36
98 TestFunctional/parallel/DryRun 1.01
99 TestFunctional/parallel/InternationalLanguage 0.46
100 TestFunctional/parallel/StatusCmd 0.5
104 TestFunctional/parallel/ServiceCmdConnect 8.55
105 TestFunctional/parallel/AddonsCmd 0.22
106 TestFunctional/parallel/PersistentVolumeClaim 29.61
108 TestFunctional/parallel/SSHCmd 0.29
109 TestFunctional/parallel/CpCmd 1.08
110 TestFunctional/parallel/MySQL 24.3
111 TestFunctional/parallel/FileSync 0.22
112 TestFunctional/parallel/CertSync 1.08
116 TestFunctional/parallel/NodeLabels 0.05
118 TestFunctional/parallel/NonActiveRuntimeDisabled 0.16
120 TestFunctional/parallel/License 0.49
121 TestFunctional/parallel/Version/short 0.1
122 TestFunctional/parallel/Version/components 0.49
123 TestFunctional/parallel/ImageCommands/ImageListShort 0.16
124 TestFunctional/parallel/ImageCommands/ImageListTable 0.16
125 TestFunctional/parallel/ImageCommands/ImageListJson 0.18
126 TestFunctional/parallel/ImageCommands/ImageListYaml 0.2
127 TestFunctional/parallel/ImageCommands/ImageBuild 1.87
128 TestFunctional/parallel/ImageCommands/Setup 1.73
129 TestFunctional/parallel/DockerEnv/bash 0.63
130 TestFunctional/parallel/UpdateContextCmd/no_changes 0.26
131 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.21
132 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.22
133 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 1
134 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 0.66
135 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 1.44
136 TestFunctional/parallel/ImageCommands/ImageSaveToFile 0.34
137 TestFunctional/parallel/ImageCommands/ImageRemove 0.42
138 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 0.79
139 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 0.41
140 TestFunctional/parallel/ServiceCmd/DeployApp 20.12
142 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.38
143 TestFunctional/parallel/ServiceCmd/List 0.18
144 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.02
146 TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup 9.14
147 TestFunctional/parallel/ServiceCmd/JSONOutput 0.39
148 TestFunctional/parallel/ServiceCmd/HTTPS 0.25
149 TestFunctional/parallel/ServiceCmd/Format 0.24
150 TestFunctional/parallel/ServiceCmd/URL 0.25
151 TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP 0.05
152 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.02
153 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.04
154 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.03
155 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.02
156 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.14
157 TestFunctional/parallel/ProfileCmd/profile_not_create 0.25
158 TestFunctional/parallel/ProfileCmd/profile_list 0.26
159 TestFunctional/parallel/ProfileCmd/profile_json_output 0.26
160 TestFunctional/parallel/MountCmd/any-port 6.19
161 TestFunctional/parallel/MountCmd/specific-port 1.32
162 TestFunctional/parallel/MountCmd/VerifyCleanup 1.73
163 TestFunctional/delete_echo-server_images 0.04
164 TestFunctional/delete_my-image_image 0.02
165 TestFunctional/delete_minikube_cached_images 0.02
169 TestMultiControlPlane/serial/StartCluster 203.83
170 TestMultiControlPlane/serial/DeployApp 4.85
171 TestMultiControlPlane/serial/PingHostFromPods 1.29
172 TestMultiControlPlane/serial/AddWorkerNode 164.26
173 TestMultiControlPlane/serial/NodeLabels 0.05
174 TestMultiControlPlane/serial/HAppyAfterClusterStart 0.35
175 TestMultiControlPlane/serial/CopyFile 9.23
176 TestMultiControlPlane/serial/StopSecondaryNode 8.7
177 TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop 0.27
178 TestMultiControlPlane/serial/RestartSecondaryNode 36.95
179 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.34
180 TestMultiControlPlane/serial/RestartClusterKeepsNodes 296.38
181 TestMultiControlPlane/serial/DeleteSecondaryNode 8.14
182 TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.26
183 TestMultiControlPlane/serial/StopCluster 24.98
185 TestMultiControlPlane/serial/DegradedAfterClusterRestart 0.25
187 TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.33
190 TestImageBuild/serial/Setup 40.53
191 TestImageBuild/serial/NormalBuild 1.39
192 TestImageBuild/serial/BuildWithBuildArg 0.51
193 TestImageBuild/serial/BuildWithDockerIgnore 0.24
194 TestImageBuild/serial/BuildWithSpecifiedDockerfile 0.22
198 TestJSONOutput/start/Command 55.17
199 TestJSONOutput/start/Audit 0
201 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
202 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
204 TestJSONOutput/pause/Command 0.46
205 TestJSONOutput/pause/Audit 0
207 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
208 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
210 TestJSONOutput/unpause/Command 0.46
211 TestJSONOutput/unpause/Audit 0
213 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
214 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
216 TestJSONOutput/stop/Command 8.34
217 TestJSONOutput/stop/Audit 0
219 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
220 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
221 TestErrorJSONOutput 0.57
226 TestMainNoArgs 0.08
227 TestMinikubeProfile 91.01
230 TestMountStart/serial/StartWithMountFirst 21.53
234 TestMultiNode/serial/FreshStart2Nodes 121.06
235 TestMultiNode/serial/DeployApp2Nodes 4.33
236 TestMultiNode/serial/PingHostFrom2Pods 0.88
237 TestMultiNode/serial/AddNode 47.67
238 TestMultiNode/serial/MultiNodeLabels 0.05
239 TestMultiNode/serial/ProfileList 0.18
240 TestMultiNode/serial/CopyFile 5.22
241 TestMultiNode/serial/StopNode 2.83
242 TestMultiNode/serial/StartAfterStop 156.33
243 TestMultiNode/serial/RestartKeepsNodes 296.51
244 TestMultiNode/serial/DeleteNode 3.41
245 TestMultiNode/serial/StopMultiNode 16.81
246 TestMultiNode/serial/RestartMultiNode 101.12
247 TestMultiNode/serial/ValidateNameConflict 44.47
251 TestPreload 215.04
253 TestScheduledStopUnix 106.64
254 TestSkaffold 116.21
257 TestRunningBinaryUpgrade 72.6
259 TestKubernetesUpgrade 240.57
261 TestStoppedBinaryUpgrade/Setup 1.36
262 TestStoppedBinaryUpgrade/Upgrade 107.33
263 TestStoppedBinaryUpgrade/MinikubeLogs 2.46
272 TestPause/serial/Start 58.18
274 TestNoKubernetes/serial/StartNoK8sWithVersion 0.66
275 TestNoKubernetes/serial/StartWithK8s 42.13
276 TestPause/serial/SecondStartNoReconfiguration 41.57
277 TestNoKubernetes/serial/StartWithStopK8s 8.39
279 TestPause/serial/Pause 0.56
280 TestPause/serial/VerifyStatus 0.16
281 TestPause/serial/Unpause 0.53
282 TestPause/serial/PauseAgain 0.53
283 TestPause/serial/DeletePaused 5.25
284 TestPause/serial/VerifyDeletedResources 0.19
296 TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current 4.24
297 TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current 7.75
298 TestNoKubernetes/serial/VerifyK8sNotRunning 0.13
299 TestNoKubernetes/serial/ProfileList 0.36
300 TestNoKubernetes/serial/Stop 8.38
301 TestNoKubernetes/serial/StartNoArgs 19.58
302 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.13
304 TestStartStop/group/old-k8s-version/serial/FirstStart 149.69
306 TestStartStop/group/no-preload/serial/FirstStart 97.9
307 TestStartStop/group/old-k8s-version/serial/DeployApp 9.49
308 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 0.73
309 TestStartStop/group/old-k8s-version/serial/Stop 8.39
310 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.32
311 TestStartStop/group/old-k8s-version/serial/SecondStart 416.7
312 TestStartStop/group/no-preload/serial/DeployApp 7.21
313 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 0.76
314 TestStartStop/group/no-preload/serial/Stop 8.44
315 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 0.34
316 TestStartStop/group/no-preload/serial/SecondStart 288.53
317 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 6
318 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 5.06
319 TestStartStop/group/no-preload/serial/VerifyKubernetesImages 0.16
320 TestStartStop/group/no-preload/serial/Pause 2.07
322 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 51.93
323 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 6.01
324 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 5.06
325 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 8.22
326 TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages 0.15
327 TestStartStop/group/old-k8s-version/serial/Pause 1.88
328 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 0.74
329 TestStartStop/group/default-k8s-diff-port/serial/Stop 8.42
331 TestStartStop/group/newest-cni/serial/FirstStart 157.71
332 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 0.31
333 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 300
334 TestStartStop/group/newest-cni/serial/DeployApp 0
335 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 0.86
336 TestStartStop/group/newest-cni/serial/Stop 8.41
337 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.33
338 TestStartStop/group/newest-cni/serial/SecondStart 145.17
339 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 6
340 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
341 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
342 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.16
343 TestStartStop/group/newest-cni/serial/Pause 1.83
344 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 5.06
345 TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages 0.15
346 TestStartStop/group/default-k8s-diff-port/serial/Pause 2.19
349 TestNetworkPlugins/group/auto/Start 60.6
350 TestNetworkPlugins/group/auto/KubeletFlags 0.15
351 TestNetworkPlugins/group/auto/NetCatPod 11.13
354 TestNetworkPlugins/group/auto/DNS 0.13
355 TestNetworkPlugins/group/auto/Localhost 0.11
356 TestNetworkPlugins/group/auto/HairPin 0.1
357 TestNetworkPlugins/group/calico/Start 82.45
358 TestStartStop/group/embed-certs/serial/Stop 8.38
359 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 0.31
360 TestStartStop/group/embed-certs/serial/SecondStart 51.59
361 TestNetworkPlugins/group/calico/ControllerPod 6.01
362 TestNetworkPlugins/group/calico/KubeletFlags 0.16
363 TestNetworkPlugins/group/calico/NetCatPod 12.14
364 TestNetworkPlugins/group/calico/DNS 0.13
365 TestNetworkPlugins/group/calico/Localhost 0.1
366 TestNetworkPlugins/group/calico/HairPin 0.1
367 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 11
368 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 5.07
369 TestStartStop/group/embed-certs/serial/VerifyKubernetesImages 0.15
370 TestStartStop/group/embed-certs/serial/Pause 2.08
371 TestNetworkPlugins/group/custom-flannel/Start 64.08
372 TestNetworkPlugins/group/false/Start 101.35
373 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.15
374 TestNetworkPlugins/group/custom-flannel/NetCatPod 11.15
375 TestNetworkPlugins/group/custom-flannel/DNS 0.14
376 TestNetworkPlugins/group/custom-flannel/Localhost 0.1
377 TestNetworkPlugins/group/custom-flannel/HairPin 0.1
378 TestNetworkPlugins/group/kindnet/Start 71.16
379 TestNetworkPlugins/group/false/KubeletFlags 0.16
380 TestNetworkPlugins/group/false/NetCatPod 12.14
381 TestNetworkPlugins/group/false/DNS 0.13
382 TestNetworkPlugins/group/false/Localhost 0.11
383 TestNetworkPlugins/group/false/HairPin 0.11
384 TestNetworkPlugins/group/flannel/Start 61.3
385 TestNetworkPlugins/group/kindnet/ControllerPod 6
386 TestNetworkPlugins/group/kindnet/KubeletFlags 0.17
387 TestNetworkPlugins/group/kindnet/NetCatPod 10.15
388 TestNetworkPlugins/group/kindnet/DNS 0.12
389 TestNetworkPlugins/group/kindnet/Localhost 0.1
390 TestNetworkPlugins/group/kindnet/HairPin 0.1
391 TestNetworkPlugins/group/enable-default-cni/Start 207.65
392 TestNetworkPlugins/group/flannel/ControllerPod 6
393 TestNetworkPlugins/group/flannel/KubeletFlags 0.16
394 TestNetworkPlugins/group/flannel/NetCatPod 11.15
395 TestNetworkPlugins/group/flannel/DNS 0.13
396 TestNetworkPlugins/group/flannel/Localhost 0.1
397 TestNetworkPlugins/group/flannel/HairPin 0.1
398 TestNetworkPlugins/group/bridge/Start 91.77
399 TestNetworkPlugins/group/bridge/KubeletFlags 0.16
400 TestNetworkPlugins/group/bridge/NetCatPod 10.13
401 TestNetworkPlugins/group/bridge/DNS 0.12
402 TestNetworkPlugins/group/bridge/Localhost 0.11
403 TestNetworkPlugins/group/bridge/HairPin 0.1
404 TestNetworkPlugins/group/kubenet/Start 93.1
405 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.17
406 TestNetworkPlugins/group/enable-default-cni/NetCatPod 11.16
407 TestNetworkPlugins/group/enable-default-cni/DNS 0.13
408 TestNetworkPlugins/group/enable-default-cni/Localhost 0.1
409 TestNetworkPlugins/group/enable-default-cni/HairPin 0.1
410 TestNetworkPlugins/group/kubenet/KubeletFlags 0.16
411 TestNetworkPlugins/group/kubenet/NetCatPod 10.14
412 TestNetworkPlugins/group/kubenet/DNS 0.13
413 TestNetworkPlugins/group/kubenet/Localhost 0.1
414 TestNetworkPlugins/group/kubenet/HairPin 0.1
x
+
TestDownloadOnly/v1.20.0/json-events (23.93s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-952000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-952000 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=docker --driver=hyperkit : (23.929801702s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (23.93s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
--- PASS: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.29s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-952000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-952000: exit status 85 (290.282229ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-952000 | jenkins | v1.33.1 | 22 Jul 24 03:28 PDT |          |
	|         | -p download-only-952000        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=docker     |                      |         |         |                     |          |
	|         | --driver=hyperkit              |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 03:28:20
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 03:28:20.585530    1639 out.go:291] Setting OutFile to fd 1 ...
	I0722 03:28:20.586212    1639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:28:20.586222    1639 out.go:304] Setting ErrFile to fd 2...
	I0722 03:28:20.586228    1639 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:28:20.586818    1639 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1111/.minikube/bin
	W0722 03:28:20.586934    1639 root.go:314] Error reading config file at /Users/jenkins/minikube-integration/19313-1111/.minikube/config/config.json: open /Users/jenkins/minikube-integration/19313-1111/.minikube/config/config.json: no such file or directory
	I0722 03:28:20.588839    1639 out.go:298] Setting JSON to true
	I0722 03:28:20.611689    1639 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1669,"bootTime":1721642431,"procs":427,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0722 03:28:20.611784    1639 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 03:28:20.634093    1639 out.go:97] [download-only-952000] minikube v1.33.1 on Darwin 14.5
	I0722 03:28:20.634318    1639 notify.go:220] Checking for updates...
	W0722 03:28:20.634339    1639 preload.go:293] Failed to list preload files: open /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/preloaded-tarball: no such file or directory
	I0722 03:28:20.655426    1639 out.go:169] MINIKUBE_LOCATION=19313
	I0722 03:28:20.676957    1639 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19313-1111/kubeconfig
	I0722 03:28:20.698601    1639 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0722 03:28:20.719574    1639 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 03:28:20.740805    1639 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1111/.minikube
	W0722 03:28:20.782610    1639 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0722 03:28:20.783119    1639 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 03:28:20.841841    1639 out.go:97] Using the hyperkit driver based on user configuration
	I0722 03:28:20.841918    1639 start.go:297] selected driver: hyperkit
	I0722 03:28:20.841930    1639 start.go:901] validating driver "hyperkit" against <nil>
	I0722 03:28:20.842127    1639 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 03:28:20.842490    1639 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19313-1111/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0722 03:28:21.244442    1639 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0722 03:28:21.249866    1639 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:28:21.249890    1639 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0722 03:28:21.249920    1639 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 03:28:21.253947    1639 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0722 03:28:21.254627    1639 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0722 03:28:21.254680    1639 cni.go:84] Creating CNI manager for ""
	I0722 03:28:21.254697    1639 cni.go:162] CNI unnecessary in this configuration, recommending no CNI
	I0722 03:28:21.254768    1639 start.go:340] cluster config:
	{Name:download-only-952000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-952000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin: FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 03:28:21.254981    1639 iso.go:125] acquiring lock: {Name:mk28fa3b914b659bb36b0449a0ad3ab1345dae32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 03:28:21.276436    1639 out.go:97] Downloading VM boot image ...
	I0722 03:28:21.276524    1639 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso.sha256 -> /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/iso/amd64/minikube-v1.33.1-1721324531-19298-amd64.iso
	I0722 03:28:30.063166    1639 out.go:97] Starting "download-only-952000" primary control-plane node in "download-only-952000" cluster
	I0722 03:28:30.063182    1639 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0722 03:28:30.113967    1639 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0722 03:28:30.113987    1639 cache.go:56] Caching tarball of preloaded images
	I0722 03:28:30.114149    1639 preload.go:131] Checking if preload exists for k8s version v1.20.0 and runtime docker
	I0722 03:28:30.133871    1639 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0722 03:28:30.133889    1639 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0722 03:28:30.212934    1639 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4?checksum=md5:9a82241e9b8b4ad2b5cca73108f2c7a3 -> /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4
	I0722 03:28:40.155262    1639 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	I0722 03:28:40.155458    1639 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-docker-overlay2-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-952000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-952000"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.29s)
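
The log above also shows where the download-only run leaves its artifacts: the VM boot ISO under .minikube/cache/iso/amd64/ and the Kubernetes preload tarball under .minikube/cache/preloaded-tarball/, which appears to be what the preload-exists subtest asserts. A quick way to inspect them by hand (paths taken verbatim from the log; adjust if MINIKUBE_HOME differs):

    ls -lh /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/iso/amd64/
    ls -lh /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/preloaded-tarball/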

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.23s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-952000
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.21s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/json-events (10.13s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-374000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-374000 --force --alsologtostderr --kubernetes-version=v1.30.3 --container-runtime=docker --driver=hyperkit : (10.133415524s)
--- PASS: TestDownloadOnly/v1.30.3/json-events (10.13s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/preload-exists
--- PASS: TestDownloadOnly/v1.30.3/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/kubectl
--- PASS: TestDownloadOnly/v1.30.3/kubectl (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.30.3/LogsDuration (0.31s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.30.3/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-374000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-374000: exit status 85 (312.491341ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-952000 | jenkins | v1.33.1 | 22 Jul 24 03:28 PDT |                     |
	|         | -p download-only-952000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.33.1 | 22 Jul 24 03:28 PDT | 22 Jul 24 03:28 PDT |
	| delete  | -p download-only-952000        | download-only-952000 | jenkins | v1.33.1 | 22 Jul 24 03:28 PDT | 22 Jul 24 03:28 PDT |
	| start   | -o=json --download-only        | download-only-374000 | jenkins | v1.33.1 | 22 Jul 24 03:28 PDT |                     |
	|         | -p download-only-374000        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3   |                      |         |         |                     |                     |
	|         | --container-runtime=docker     |                      |         |         |                     |                     |
	|         | --driver=hyperkit              |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 03:28:45
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 03:28:45.245543    1668 out.go:291] Setting OutFile to fd 1 ...
	I0722 03:28:45.245793    1668 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:28:45.245799    1668 out.go:304] Setting ErrFile to fd 2...
	I0722 03:28:45.245803    1668 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:28:45.245984    1668 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1111/.minikube/bin
	I0722 03:28:45.247405    1668 out.go:298] Setting JSON to true
	I0722 03:28:45.269737    1668 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1694,"bootTime":1721642431,"procs":427,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0722 03:28:45.269820    1668 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 03:28:45.291480    1668 out.go:97] [download-only-374000] minikube v1.33.1 on Darwin 14.5
	I0722 03:28:45.291616    1668 notify.go:220] Checking for updates...
	I0722 03:28:45.312243    1668 out.go:169] MINIKUBE_LOCATION=19313
	I0722 03:28:45.333376    1668 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19313-1111/kubeconfig
	I0722 03:28:45.354231    1668 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0722 03:28:45.375524    1668 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 03:28:45.396494    1668 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1111/.minikube
	W0722 03:28:45.438436    1668 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0722 03:28:45.438923    1668 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 03:28:45.468346    1668 out.go:97] Using the hyperkit driver based on user configuration
	I0722 03:28:45.468429    1668 start.go:297] selected driver: hyperkit
	I0722 03:28:45.468440    1668 start.go:901] validating driver "hyperkit" against <nil>
	I0722 03:28:45.468649    1668 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 03:28:45.468886    1668 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19313-1111/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0722 03:28:45.478574    1668 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0722 03:28:45.482966    1668 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:28:45.483001    1668 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0722 03:28:45.483024    1668 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 03:28:45.485968    1668 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0722 03:28:45.486126    1668 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0722 03:28:45.486171    1668 cni.go:84] Creating CNI manager for ""
	I0722 03:28:45.486185    1668 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 03:28:45.486195    1668 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 03:28:45.486271    1668 start.go:340] cluster config:
	{Name:download-only-374000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.30.3 ClusterName:download-only-374000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 03:28:45.486373    1668 iso.go:125] acquiring lock: {Name:mk28fa3b914b659bb36b0449a0ad3ab1345dae32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 03:28:45.507049    1668 out.go:97] Starting "download-only-374000" primary control-plane node in "download-only-374000" cluster
	I0722 03:28:45.507083    1668 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 03:28:45.563988    1668 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0722 03:28:45.564061    1668 cache.go:56] Caching tarball of preloaded images
	I0722 03:28:45.564519    1668 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 03:28:45.586140    1668 out.go:97] Downloading Kubernetes v1.30.3 preload ...
	I0722 03:28:45.586210    1668 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0722 03:28:45.667601    1668 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.30.3/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4?checksum=md5:6304692df2fe6f7b0bdd7f93d160be8c -> /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4
	I0722 03:28:52.909777    1668 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0722 03:28:52.909962    1668 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.30.3-docker-overlay2-amd64.tar.lz4 ...
	I0722 03:28:53.401457    1668 cache.go:59] Finished verifying existence of preloaded tar for v1.30.3 on docker
	I0722 03:28:53.401697    1668 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/download-only-374000/config.json ...
	I0722 03:28:53.401721    1668 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/download-only-374000/config.json: {Name:mk61c5c2249ffccd3510c06b783a2ff81dee6a03 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:28:53.402025    1668 preload.go:131] Checking if preload exists for k8s version v1.30.3 and runtime docker
	I0722 03:28:53.402244    1668 download.go:107] Downloading: https://dl.k8s.io/release/v1.30.3/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.30.3/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/darwin/amd64/v1.30.3/kubectl
	
	
	* The control-plane node download-only-374000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-374000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.30.3/LogsDuration (0.31s)

TestDownloadOnly/v1.30.3/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.30.3/DeleteAll (0.23s)

TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-374000
--- PASS: TestDownloadOnly/v1.30.3/DeleteAlwaysSucceeds (0.21s)

TestDownloadOnly/v1.31.0-beta.0/json-events (16.23s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -o=json --download-only -p download-only-446000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=hyperkit 
aaa_download_only_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -o=json --download-only -p download-only-446000 --force --alsologtostderr --kubernetes-version=v1.31.0-beta.0 --container-runtime=docker --driver=hyperkit : (16.229358424s)
--- PASS: TestDownloadOnly/v1.31.0-beta.0/json-events (16.23s)

TestDownloadOnly/v1.31.0-beta.0/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/preload-exists
--- PASS: TestDownloadOnly/v1.31.0-beta.0/preload-exists (0.00s)

TestDownloadOnly/v1.31.0-beta.0/kubectl (0s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/kubectl
--- PASS: TestDownloadOnly/v1.31.0-beta.0/kubectl (0.00s)

TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.29s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 logs -p download-only-446000
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-darwin-amd64 logs -p download-only-446000: exit status 85 (290.143089ms)

-- stdout --
	
	==> Audit <==
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                Args                 |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only             | download-only-952000 | jenkins | v1.33.1 | 22 Jul 24 03:28 PDT |                     |
	|         | -p download-only-952000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=hyperkit                   |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 22 Jul 24 03:28 PDT | 22 Jul 24 03:28 PDT |
	| delete  | -p download-only-952000             | download-only-952000 | jenkins | v1.33.1 | 22 Jul 24 03:28 PDT | 22 Jul 24 03:28 PDT |
	| start   | -o=json --download-only             | download-only-374000 | jenkins | v1.33.1 | 22 Jul 24 03:28 PDT |                     |
	|         | -p download-only-374000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.30.3        |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=hyperkit                   |                      |         |         |                     |                     |
	| delete  | --all                               | minikube             | jenkins | v1.33.1 | 22 Jul 24 03:28 PDT | 22 Jul 24 03:28 PDT |
	| delete  | -p download-only-374000             | download-only-374000 | jenkins | v1.33.1 | 22 Jul 24 03:28 PDT | 22 Jul 24 03:28 PDT |
	| start   | -o=json --download-only             | download-only-446000 | jenkins | v1.33.1 | 22 Jul 24 03:28 PDT |                     |
	|         | -p download-only-446000             |                      |         |         |                     |                     |
	|         | --force --alsologtostderr           |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.31.0-beta.0 |                      |         |         |                     |                     |
	|         | --container-runtime=docker          |                      |         |         |                     |                     |
	|         | --driver=hyperkit                   |                      |         |         |                     |                     |
	|---------|-------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/07/22 03:28:56
	Running on machine: MacOS-Agent-2
	Binary: Built with gc go1.22.5 for darwin/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0722 03:28:56.131789    1692 out.go:291] Setting OutFile to fd 1 ...
	I0722 03:28:56.132035    1692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:28:56.132040    1692 out.go:304] Setting ErrFile to fd 2...
	I0722 03:28:56.132044    1692 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:28:56.132219    1692 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1111/.minikube/bin
	I0722 03:28:56.133734    1692 out.go:298] Setting JSON to true
	I0722 03:28:56.156011    1692 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":1705,"bootTime":1721642431,"procs":417,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0722 03:28:56.156100    1692 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 03:28:56.177834    1692 out.go:97] [download-only-446000] minikube v1.33.1 on Darwin 14.5
	I0722 03:28:56.178080    1692 notify.go:220] Checking for updates...
	I0722 03:28:56.199663    1692 out.go:169] MINIKUBE_LOCATION=19313
	I0722 03:28:56.241357    1692 out.go:169] KUBECONFIG=/Users/jenkins/minikube-integration/19313-1111/kubeconfig
	I0722 03:28:56.262771    1692 out.go:169] MINIKUBE_BIN=out/minikube-darwin-amd64
	I0722 03:28:56.283773    1692 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 03:28:56.304703    1692 out.go:169] MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1111/.minikube
	W0722 03:28:56.346799    1692 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0722 03:28:56.347280    1692 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 03:28:56.377544    1692 out.go:97] Using the hyperkit driver based on user configuration
	I0722 03:28:56.377613    1692 start.go:297] selected driver: hyperkit
	I0722 03:28:56.377624    1692 start.go:901] validating driver "hyperkit" against <nil>
	I0722 03:28:56.377832    1692 install.go:52] acquiring lock: {Name:mk4023283b30b374c3f04c8805d539e68824c0b8 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 03:28:56.378093    1692 install.go:117] Validating docker-machine-driver-hyperkit, PATH=/Users/jenkins/minikube-integration/19313-1111/.minikube/bin:/Users/jenkins/workspace/out/:/usr/bin:/bin:/usr/sbin:/sbin:/Users/jenkins/google-cloud-sdk/bin:/usr/local/bin/:/usr/local/go/bin/:/Users/jenkins/go/bin
	I0722 03:28:56.388073    1692 install.go:137] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit version is 1.33.1
	I0722 03:28:56.392403    1692 install.go:79] stdout: /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:28:56.392422    1692 install.go:81] /Users/jenkins/workspace/out/docker-machine-driver-hyperkit looks good
	I0722 03:28:56.392447    1692 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0722 03:28:56.395198    1692 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32768MB, container=0MB
	I0722 03:28:56.395333    1692 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0722 03:28:56.395354    1692 cni.go:84] Creating CNI manager for ""
	I0722 03:28:56.395369    1692 cni.go:158] "hyperkit" driver + "docker" container runtime found on kubernetes v1.24+, recommending bridge
	I0722 03:28:56.395382    1692 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0722 03:28:56.395454    1692 start.go:340] cluster config:
	{Name:download-only-446000 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:6000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.31.0-beta.0 ClusterName:download-only-446000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster
.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.31.0-beta.0 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 03:28:56.395536    1692 iso.go:125] acquiring lock: {Name:mk28fa3b914b659bb36b0449a0ad3ab1345dae32 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0722 03:28:56.416784    1692 out.go:97] Starting "download-only-446000" primary control-plane node in "download-only-446000" cluster
	I0722 03:28:56.416840    1692 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0722 03:28:56.473469    1692 preload.go:118] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0722 03:28:56.473506    1692 cache.go:56] Caching tarball of preloaded images
	I0722 03:28:56.473880    1692 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0722 03:28:56.495596    1692 out.go:97] Downloading Kubernetes v1.31.0-beta.0 preload ...
	I0722 03:28:56.495622    1692 preload.go:236] getting checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0722 03:28:56.579081    1692 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.31.0-beta.0/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4?checksum=md5:181d3c061f7abe363e688bf9ac3c9580 -> /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4
	I0722 03:29:07.662728    1692 preload.go:247] saving checksum for preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0722 03:29:07.662967    1692 preload.go:254] verifying checksum of /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.31.0-beta.0-docker-overlay2-amd64.tar.lz4 ...
	I0722 03:29:08.121885    1692 cache.go:59] Finished verifying existence of preloaded tar for v1.31.0-beta.0 on docker
	I0722 03:29:08.122155    1692 profile.go:143] Saving config to /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/download-only-446000/config.json ...
	I0722 03:29:08.122177    1692 lock.go:35] WriteFile acquiring /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/download-only-446000/config.json: {Name:mk70f1631d263d7c9a136d3ebc452648b20abf30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0722 03:29:08.122529    1692 preload.go:131] Checking if preload exists for k8s version v1.31.0-beta.0 and runtime docker
	I0722 03:29:08.122797    1692 download.go:107] Downloading: https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.31.0-beta.0/bin/darwin/amd64/kubectl.sha256 -> /Users/jenkins/minikube-integration/19313-1111/.minikube/cache/darwin/amd64/v1.31.0-beta.0/kubectl
	
	
	* The control-plane node download-only-446000 host does not exist
	  To start a cluster, run: "minikube start -p download-only-446000"

-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.31.0-beta.0/LogsDuration (0.29s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.23s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-darwin-amd64 delete --all
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAll (0.23s)

TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.21s)

=== RUN   TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-darwin-amd64 delete -p download-only-446000
--- PASS: TestDownloadOnly/v1.31.0-beta.0/DeleteAlwaysSucceeds (0.21s)

TestBinaryMirror (0.93s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 start --download-only -p binary-mirror-094000 --alsologtostderr --binary-mirror http://127.0.0.1:49542 --driver=hyperkit 
helpers_test.go:175: Cleaning up "binary-mirror-094000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p binary-mirror-094000
--- PASS: TestBinaryMirror (0.93s)

TestOffline (61.34s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 start -p offline-docker-685000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit 
aab_offline_test.go:55: (dbg) Done: out/minikube-darwin-amd64 start -p offline-docker-685000 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=hyperkit : (56.038800284s)
helpers_test.go:175: Cleaning up "offline-docker-685000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p offline-docker-685000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p offline-docker-685000: (5.303201235s)
--- PASS: TestOffline (61.34s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:1029: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-616000
addons_test.go:1029: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons enable dashboard -p addons-616000: exit status 85 (207.917451ms)

-- stdout --
	* Profile "addons-616000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-616000"

-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.21s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:1040: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-616000
addons_test.go:1040: (dbg) Non-zero exit: out/minikube-darwin-amd64 addons disable dashboard -p addons-616000: exit status 85 (187.371363ms)

-- stdout --
	* Profile "addons-616000" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-616000"

-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.19s)

TestAddons/Setup (211.75s)

=== RUN   TestAddons/Setup
addons_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 start -p addons-616000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:110: (dbg) Done: out/minikube-darwin-amd64 start -p addons-616000 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --addons=volcano --driver=hyperkit  --addons=ingress --addons=ingress-dns --addons=helm-tiller: (3m31.751223883s)
--- PASS: TestAddons/Setup (211.75s)

TestAddons/parallel/Registry (14.53s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:332: registry stabilized in 9.788596ms
addons_test.go:334: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-656c9c8d9c-frcrc" [096cd948-cd8a-4100-bcd6-6cbdf6d93bff] Running
addons_test.go:334: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.003585817s
addons_test.go:337: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-92n88" [268b00ac-8e90-4991-a875-51d75011fe27] Running
addons_test.go:337: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 5.004722983s
addons_test.go:342: (dbg) Run:  kubectl --context addons-616000 delete po -l run=registry-test --now
addons_test.go:347: (dbg) Run:  kubectl --context addons-616000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:347: (dbg) Done: kubectl --context addons-616000 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (3.875174744s)
addons_test.go:361: (dbg) Run:  out/minikube-darwin-amd64 -p addons-616000 ip
2024/07/22 03:33:00 [DEBUG] GET http://192.169.0.2:5000
addons_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 -p addons-616000 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (14.53s)

TestAddons/parallel/Ingress (19.15s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:209: (dbg) Run:  kubectl --context addons-616000 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:234: (dbg) Run:  kubectl --context addons-616000 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:247: (dbg) Run:  kubectl --context addons-616000 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [cce6cc15-0959-4058-ba36-5ddcd85dd925] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [cce6cc15-0959-4058-ba36-5ddcd85dd925] Running
addons_test.go:252: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.003639907s
addons_test.go:264: (dbg) Run:  out/minikube-darwin-amd64 -p addons-616000 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:288: (dbg) Run:  kubectl --context addons-616000 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:293: (dbg) Run:  out/minikube-darwin-amd64 -p addons-616000 ip
addons_test.go:299: (dbg) Run:  nslookup hello-john.test 192.169.0.2
addons_test.go:308: (dbg) Run:  out/minikube-darwin-amd64 -p addons-616000 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:313: (dbg) Run:  out/minikube-darwin-amd64 -p addons-616000 addons disable ingress --alsologtostderr -v=1
addons_test.go:313: (dbg) Done: out/minikube-darwin-amd64 -p addons-616000 addons disable ingress --alsologtostderr -v=1: (7.536873636s)
--- PASS: TestAddons/parallel/Ingress (19.15s)

TestAddons/parallel/InspektorGadget (10.54s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-2557q" [30a9d3b8-abba-4759-966b-14d3644d9f2b] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:840: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 5.00324875s
addons_test.go:843: (dbg) Run:  out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-616000
addons_test.go:843: (dbg) Done: out/minikube-darwin-amd64 addons disable inspektor-gadget -p addons-616000: (5.539369452s)
--- PASS: TestAddons/parallel/InspektorGadget (10.54s)

TestAddons/parallel/MetricsServer (5.47s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:409: metrics-server stabilized in 1.689024ms
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-c59844bb4-j28pt" [beec96ed-a606-424d-b2ce-8f9a9265ece3] Running
addons_test.go:411: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 5.0030377s
addons_test.go:417: (dbg) Run:  kubectl --context addons-616000 top pods -n kube-system
addons_test.go:434: (dbg) Run:  out/minikube-darwin-amd64 -p addons-616000 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (5.47s)

TestAddons/parallel/HelmTiller (10.16s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:458: tiller-deploy stabilized in 1.71725ms
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-6677d64bcd-5nd2m" [1eafd75e-7801-422e-909f-185a8e26618d] Running
addons_test.go:460: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.004538312s
addons_test.go:475: (dbg) Run:  kubectl --context addons-616000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:475: (dbg) Done: kubectl --context addons-616000 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.728358881s)
addons_test.go:492: (dbg) Run:  out/minikube-darwin-amd64 -p addons-616000 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (10.16s)

TestAddons/parallel/CSI (50.57s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:563: csi-hostpath-driver pods stabilized in 4.208145ms
addons_test.go:566: (dbg) Run:  kubectl --context addons-616000 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:571: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:576: (dbg) Run:  kubectl --context addons-616000 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:581: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [cd514e18-0b4f-4596-b8c6-ecc834cb9814] Pending
helpers_test.go:344: "task-pv-pod" [cd514e18-0b4f-4596-b8c6-ecc834cb9814] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [cd514e18-0b4f-4596-b8c6-ecc834cb9814] Running
addons_test.go:581: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 8.003888114s
addons_test.go:586: (dbg) Run:  kubectl --context addons-616000 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:591: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-616000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:419: (dbg) Run:  kubectl --context addons-616000 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:596: (dbg) Run:  kubectl --context addons-616000 delete pod task-pv-pod
addons_test.go:602: (dbg) Run:  kubectl --context addons-616000 delete pvc hpvc
addons_test.go:608: (dbg) Run:  kubectl --context addons-616000 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:613: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:618: (dbg) Run:  kubectl --context addons-616000 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:623: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [688beb98-10b3-4060-9c8f-20ff0bd65c2a] Pending
helpers_test.go:344: "task-pv-pod-restore" [688beb98-10b3-4060-9c8f-20ff0bd65c2a] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [688beb98-10b3-4060-9c8f-20ff0bd65c2a] Running
addons_test.go:623: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 10.00736284s
addons_test.go:628: (dbg) Run:  kubectl --context addons-616000 delete pod task-pv-pod-restore
addons_test.go:628: (dbg) Done: kubectl --context addons-616000 delete pod task-pv-pod-restore: (1.000055131s)
addons_test.go:632: (dbg) Run:  kubectl --context addons-616000 delete pvc hpvc-restore
addons_test.go:636: (dbg) Run:  kubectl --context addons-616000 delete volumesnapshot new-snapshot-demo
addons_test.go:640: (dbg) Run:  out/minikube-darwin-amd64 -p addons-616000 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:640: (dbg) Done: out/minikube-darwin-amd64 -p addons-616000 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.460941993s)
addons_test.go:644: (dbg) Run:  out/minikube-darwin-amd64 -p addons-616000 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (50.57s)

TestAddons/parallel/Headlamp (12.92s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:826: (dbg) Run:  out/minikube-darwin-amd64 addons enable headlamp -p addons-616000 --alsologtostderr -v=1
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-7867546754-stz76" [00295775-1a2c-4100-8b79-9407245c9cfb] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-7867546754-stz76" [00295775-1a2c-4100-8b79-9407245c9cfb] Running
addons_test.go:831: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 12.004084938s
--- PASS: TestAddons/parallel/Headlamp (12.92s)

TestAddons/parallel/CloudSpanner (5.36s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6fcd4f6f98-46fht" [940ab48b-b5f6-46da-afd7-b25035ae6085] Running
addons_test.go:859: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 5.002675303s
addons_test.go:862: (dbg) Run:  out/minikube-darwin-amd64 addons disable cloud-spanner -p addons-616000
--- PASS: TestAddons/parallel/CloudSpanner (5.36s)

TestAddons/parallel/LocalPath (58.43s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:974: (dbg) Run:  kubectl --context addons-616000 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:980: (dbg) Run:  kubectl --context addons-616000 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:984: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-616000 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [5364f690-4c1e-42b2-b6d8-2ee04c1f96ad] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [5364f690-4c1e-42b2-b6d8-2ee04c1f96ad] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [5364f690-4c1e-42b2-b6d8-2ee04c1f96ad] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:987: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 9.003005237s
addons_test.go:992: (dbg) Run:  kubectl --context addons-616000 get pvc test-pvc -o=json
addons_test.go:1001: (dbg) Run:  out/minikube-darwin-amd64 -p addons-616000 ssh "cat /opt/local-path-provisioner/pvc-5900363d-4279-4a56-8837-47f38d031d52_default_test-pvc/file1"
addons_test.go:1013: (dbg) Run:  kubectl --context addons-616000 delete pod test-local-path
addons_test.go:1017: (dbg) Run:  kubectl --context addons-616000 delete pvc test-pvc
addons_test.go:1021: (dbg) Run:  out/minikube-darwin-amd64 -p addons-616000 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:1021: (dbg) Done: out/minikube-darwin-amd64 -p addons-616000 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (42.780965729s)
--- PASS: TestAddons/parallel/LocalPath (58.43s)

TestAddons/parallel/NvidiaDevicePlugin (5.35s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-g77f4" [ada78ad2-2b44-45a0-8ec2-be162f910857] Running
addons_test.go:1053: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 5.00545496s
addons_test.go:1056: (dbg) Run:  out/minikube-darwin-amd64 addons disable nvidia-device-plugin -p addons-616000
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (5.35s)

TestAddons/parallel/Yakd (5.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-799879c74f-qbv27" [95b48ef2-ce3f-45c0-a8b1-5b42b9241774] Running
addons_test.go:1064: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 5.005359742s
--- PASS: TestAddons/parallel/Yakd (5.01s)

TestAddons/parallel/Volcano (39.21s)

=== RUN   TestAddons/parallel/Volcano
=== PAUSE TestAddons/parallel/Volcano

=== CONT  TestAddons/parallel/Volcano
addons_test.go:889: volcano-scheduler stabilized in 1.329674ms
addons_test.go:897: volcano-admission stabilized in 1.741918ms
addons_test.go:905: volcano-controller stabilized in 2.016887ms
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-scheduler" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-scheduler-844f6db89b-b2k96" [b2ce72e5-ee3b-41a2-bc4b-4ce948eb8075] Running
addons_test.go:911: (dbg) TestAddons/parallel/Volcano: app=volcano-scheduler healthy within 5.004463999s
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-admission" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-admission-5f7844f7bc-n9tjb" [f2cc25a0-3290-46ad-8a46-afefa5a1af2f] Running
addons_test.go:915: (dbg) TestAddons/parallel/Volcano: app=volcano-admission healthy within 5.004809491s
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: waiting 6m0s for pods matching "app=volcano-controller" in namespace "volcano-system" ...
helpers_test.go:344: "volcano-controllers-59cb4746db-hng6l" [5b9ea33d-1f8b-4112-9eb2-27f8b220d3ff] Running
addons_test.go:919: (dbg) TestAddons/parallel/Volcano: app=volcano-controller healthy within 5.008013158s
addons_test.go:924: (dbg) Run:  kubectl --context addons-616000 delete -n volcano-system job volcano-admission-init
addons_test.go:930: (dbg) Run:  kubectl --context addons-616000 create -f testdata/vcjob.yaml
addons_test.go:938: (dbg) Run:  kubectl --context addons-616000 get vcjob -n my-volcano
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: waiting 3m0s for pods matching "volcano.sh/job-name=test-job" in namespace "my-volcano" ...
helpers_test.go:344: "test-job-nginx-0" [2cbd2ae3-ae7b-4c51-a1a3-3e36d1058681] Pending
helpers_test.go:344: "test-job-nginx-0" [2cbd2ae3-ae7b-4c51-a1a3-3e36d1058681] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "test-job-nginx-0" [2cbd2ae3-ae7b-4c51-a1a3-3e36d1058681] Running
addons_test.go:956: (dbg) TestAddons/parallel/Volcano: volcano.sh/job-name=test-job healthy within 14.003896474s
addons_test.go:960: (dbg) Run:  out/minikube-darwin-amd64 -p addons-616000 addons disable volcano --alsologtostderr -v=1
addons_test.go:960: (dbg) Done: out/minikube-darwin-amd64 -p addons-616000 addons disable volcano --alsologtostderr -v=1: (9.940100544s)
--- PASS: TestAddons/parallel/Volcano (39.21s)

TestAddons/serial/GCPAuth/Namespaces (0.09s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:652: (dbg) Run:  kubectl --context addons-616000 create ns new-namespace
addons_test.go:666: (dbg) Run:  kubectl --context addons-616000 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.09s)
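
A short sketch of the check above, assuming the addons-616000 profile with the gcp-auth addon enabled: creating a fresh namespace and then reading the gcp-auth secret from it should succeed, since the addon replicates its credentials into newly created namespaces.
	kubectl --context addons-616000 create ns new-namespace
	kubectl --context addons-616000 get secret gcp-auth -n new-namespace   # secret should already be present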

                                                
                                    
TestAddons/StoppedEnableDisable (5.93s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:174: (dbg) Run:  out/minikube-darwin-amd64 stop -p addons-616000
addons_test.go:174: (dbg) Done: out/minikube-darwin-amd64 stop -p addons-616000: (5.383355447s)
addons_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p addons-616000
addons_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 addons disable dashboard -p addons-616000
addons_test.go:187: (dbg) Run:  out/minikube-darwin-amd64 addons disable gvisor -p addons-616000
--- PASS: TestAddons/StoppedEnableDisable (5.93s)

                                                
                                    
TestCertOptions (42.77s)

                                                
                                                
=== RUN   TestCertOptions
=== PAUSE TestCertOptions

                                                
                                                

                                                
                                                
=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-darwin-amd64 start -p cert-options-926000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit 
E0722 04:32:46.729436    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
E0722 04:33:01.007150    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/skaffold-264000/client.crt: no such file or directory
cert_options_test.go:49: (dbg) Done: out/minikube-darwin-amd64 start -p cert-options-926000 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=hyperkit : (39.033173029s)
cert_options_test.go:60: (dbg) Run:  out/minikube-darwin-amd64 -p cert-options-926000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-926000 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-darwin-amd64 ssh -p cert-options-926000 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-926000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cert-options-926000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p cert-options-926000: (3.380520491s)
--- PASS: TestCertOptions (42.77s)
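
A trimmed shell sketch of the certificate checks above (profile name, SANs, and apiserver port taken from this run):
	out/minikube-darwin-amd64 start -p cert-options-926000 --memory=2048 \
	  --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 \
	  --apiserver-names=localhost --apiserver-names=www.google.com \
	  --apiserver-port=8555 --driver=hyperkit
	# The extra IPs and names should appear as SANs on the generated apiserver certificate
	out/minikube-darwin-amd64 -p cert-options-926000 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
	# The kubeconfig and admin.conf should point at the custom apiserver port 8555
	kubectl --context cert-options-926000 config view
	out/minikube-darwin-amd64 ssh -p cert-options-926000 -- "sudo cat /etc/kubernetes/admin.conf"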

                                                
                                    
TestDockerFlags (55.54s)

                                                
                                                
=== RUN   TestDockerFlags
=== PAUSE TestDockerFlags

                                                
                                                

                                                
                                                
=== CONT  TestDockerFlags
docker_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 start -p docker-flags-856000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit 
E0722 04:31:59.565514    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/skaffold-264000/client.crt: no such file or directory
E0722 04:32:20.045551    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/skaffold-264000/client.crt: no such file or directory
docker_test.go:51: (dbg) Done: out/minikube-darwin-amd64 start -p docker-flags-856000 --cache-images=false --memory=2048 --install-addons=false --wait=false --docker-env=FOO=BAR --docker-env=BAZ=BAT --docker-opt=debug --docker-opt=icc=true --alsologtostderr -v=5 --driver=hyperkit : (49.960329632s)
docker_test.go:56: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-856000 ssh "sudo systemctl show docker --property=Environment --no-pager"
docker_test.go:67: (dbg) Run:  out/minikube-darwin-amd64 -p docker-flags-856000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"
helpers_test.go:175: Cleaning up "docker-flags-856000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p docker-flags-856000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p docker-flags-856000: (5.250908116s)
--- PASS: TestDockerFlags (55.54s)
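
A condensed sketch of the flag plumbing this test verifies (only the relevant flags from the run above are kept):
	out/minikube-darwin-amd64 start -p docker-flags-856000 --memory=2048 \
	  --docker-env=FOO=BAR --docker-env=BAZ=BAT \
	  --docker-opt=debug --docker-opt=icc=true --driver=hyperkit
	# --docker-env values should appear in the docker unit's Environment, --docker-opt values in its ExecStart line
	out/minikube-darwin-amd64 -p docker-flags-856000 ssh "sudo systemctl show docker --property=Environment --no-pager"
	out/minikube-darwin-amd64 -p docker-flags-856000 ssh "sudo systemctl show docker --property=ExecStart --no-pager"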

                                                
                                    
TestForceSystemdFlag (41.56s)

                                                
                                                
=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-flag-826000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit 
E0722 04:31:49.324544    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/skaffold-264000/client.crt: no such file or directory
docker_test.go:91: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-flag-826000 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=hyperkit : (36.159981241s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-flag-826000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-flag-826000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-flag-826000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-flag-826000: (5.23380305s)
--- PASS: TestForceSystemdFlag (41.56s)
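
A short sketch of the cgroup-driver check above; with --force-systemd the Docker daemon inside the VM is expected to report the systemd cgroup driver rather than the cgroupfs default:
	out/minikube-darwin-amd64 start -p force-systemd-flag-826000 --memory=2048 --force-systemd --driver=hyperkit
	# Expected to print "systemd"
	out/minikube-darwin-amd64 -p force-systemd-flag-826000 ssh "docker info --format {{.CgroupDriver}}"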

                                                
                                    
TestForceSystemdEnv (44.57s)

                                                
                                                
=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

                                                
                                                

                                                
                                                
=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-darwin-amd64 start -p force-systemd-env-128000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit 
docker_test.go:155: (dbg) Done: out/minikube-darwin-amd64 start -p force-systemd-env-128000 --memory=2048 --alsologtostderr -v=5 --driver=hyperkit : (40.478398558s)
docker_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 -p force-systemd-env-128000 ssh "docker info --format {{.CgroupDriver}}"
helpers_test.go:175: Cleaning up "force-systemd-env-128000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p force-systemd-env-128000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p force-systemd-env-128000: (3.925894078s)
--- PASS: TestForceSystemdEnv (44.57s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (8.89s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
=== PAUSE TestHyperKitDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestHyperKitDriverInstallOrUpdate
--- PASS: TestHyperKitDriverInstallOrUpdate (8.89s)

                                                
                                    
TestErrorSpam/setup (35.46s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-darwin-amd64 start -p nospam-582000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-582000 --driver=hyperkit 
error_spam_test.go:81: (dbg) Done: out/minikube-darwin-amd64 start -p nospam-582000 -n=1 --memory=2250 --wait=false --log_dir=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-582000 --driver=hyperkit : (35.45907828s)
--- PASS: TestErrorSpam/setup (35.46s)

                                                
                                    
TestErrorSpam/start (1.6s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-582000 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-582000 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-582000 start --dry-run
--- PASS: TestErrorSpam/start (1.60s)

                                                
                                    
TestErrorSpam/status (0.49s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-582000 status
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-582000 status
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-582000 status
--- PASS: TestErrorSpam/status (0.49s)

                                                
                                    
TestErrorSpam/pause (1.29s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-582000 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-582000 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-582000 pause
--- PASS: TestErrorSpam/pause (1.29s)

                                                
                                    
TestErrorSpam/unpause (1.36s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-582000 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-582000 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-582000 unpause
--- PASS: TestErrorSpam/unpause (1.36s)

                                                
                                    
TestErrorSpam/stop (155.83s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-582000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-582000 stop: (5.399545637s)
error_spam_test.go:159: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-582000 stop
error_spam_test.go:159: (dbg) Done: out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-582000 stop: (1m15.226058821s)
error_spam_test.go:182: (dbg) Run:  out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-582000 stop
E0722 03:37:46.636137    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
E0722 03:37:46.643513    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
E0722 03:37:46.655847    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
E0722 03:37:46.677285    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
E0722 03:37:46.719551    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
E0722 03:37:46.800121    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
E0722 03:37:46.962377    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
E0722 03:37:47.284533    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
E0722 03:37:47.926898    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
E0722 03:37:49.208291    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
E0722 03:37:51.769470    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
E0722 03:37:56.889716    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
E0722 03:38:07.131761    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
error_spam_test.go:182: (dbg) Done: out/minikube-darwin-amd64 -p nospam-582000 --log_dir /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/nospam-582000 stop: (1m15.19993702s)
--- PASS: TestErrorSpam/stop (155.83s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /Users/jenkins/minikube-integration/19313-1111/.minikube/files/etc/test/nested/copy/1637/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (55.49s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-963000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit 
E0722 03:38:27.612569    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
E0722 03:39:08.572576    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
functional_test.go:2230: (dbg) Done: out/minikube-darwin-amd64 start -p functional-963000 --memory=4000 --apiserver-port=8441 --wait=all --driver=hyperkit : (55.485434412s)
--- PASS: TestFunctional/serial/StartWithProxy (55.49s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (39.97s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-963000 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-darwin-amd64 start -p functional-963000 --alsologtostderr -v=8: (39.967268548s)
functional_test.go:659: soft start took 39.967816532s for "functional-963000" cluster.
--- PASS: TestFunctional/serial/SoftStart (39.97s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-963000 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.13s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-darwin-amd64 -p functional-963000 cache add registry.k8s.io/pause:3.1: (1.279597374s)
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 cache add registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.13s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-963000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialCacheCmdcacheadd_local1215751999/001
functional_test.go:1085: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 cache add minikube-local-cache-test:functional-963000
functional_test.go:1090: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 cache delete minikube-local-cache-test:functional-963000
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-963000
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.36s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-darwin-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.08s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.17s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.08s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh sudo docker rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-963000 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (152.420981ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.08s)
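
The cache-reload round trip above, as a standalone sketch against the functional-963000 profile:
	# Remove the cached image inside the node, confirm it is gone, then restore it from the host-side cache
	out/minikube-darwin-amd64 -p functional-963000 ssh sudo docker rmi registry.k8s.io/pause:latest
	out/minikube-darwin-amd64 -p functional-963000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # fails: image no longer present
	out/minikube-darwin-amd64 -p functional-963000 cache reload
	out/minikube-darwin-amd64 -p functional-963000 ssh sudo crictl inspecti registry.k8s.io/pause:latest   # succeeds again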

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.16s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-darwin-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.16s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (1.18s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 kubectl -- --context functional-963000 get pods
functional_test.go:712: (dbg) Done: out/minikube-darwin-amd64 -p functional-963000 kubectl -- --context functional-963000 get pods: (1.182314131s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (1.18s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (1.49s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-963000 get pods
functional_test.go:737: (dbg) Done: out/kubectl --context functional-963000 get pods: (1.491730869s)
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (1.49s)

                                                
                                    
TestFunctional/serial/ExtraConfig (59.97s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-963000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
E0722 03:40:30.492034    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
functional_test.go:753: (dbg) Done: out/minikube-darwin-amd64 start -p functional-963000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (59.97224483s)
functional_test.go:757: restart took 59.972378222s for "functional-963000" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (59.97s)
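
The restart above in one line, showing how component flags are passed through --extra-config (admission plugin value taken from this run):
	out/minikube-darwin-amd64 start -p functional-963000 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all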

                                                
                                    
TestFunctional/serial/ComponentHealth (0.05s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-963000 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.05s)

                                                
                                    
TestFunctional/serial/LogsCmd (2.71s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 logs
functional_test.go:1232: (dbg) Done: out/minikube-darwin-amd64 -p functional-963000 logs: (2.70917272s)
--- PASS: TestFunctional/serial/LogsCmd (2.71s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (2.72s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd1141033035/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-darwin-amd64 -p functional-963000 logs --file /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalserialLogsFileCmd1141033035/001/logs.txt: (2.713749287s)
--- PASS: TestFunctional/serial/LogsFileCmd (2.72s)

                                                
                                    
TestFunctional/serial/InvalidService (4.6s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-963000 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-darwin-amd64 service invalid-svc -p functional-963000
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-darwin-amd64 service invalid-svc -p functional-963000: exit status 115 (264.453797ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|--------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |           URL            |
	|-----------|-------------|-------------|--------------------------|
	| default   | invalid-svc |          80 | http://192.169.0.4:32465 |
	|-----------|-------------|-------------|--------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                            │
	│    * If the above advice does not help, please let us know:                                                                │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                              │
	│                                                                                                                            │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                   │
	│    * Please also attach the following file to the GitHub issue:                                                            │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log    │
	│                                                                                                                            │
	╰────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-963000 delete -f testdata/invalidsvc.yaml
functional_test.go:2323: (dbg) Done: kubectl --context functional-963000 delete -f testdata/invalidsvc.yaml: (1.199333584s)
--- PASS: TestFunctional/serial/InvalidService (4.60s)
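
A compact sketch of the negative check above: a Service whose backing pod never runs should make `minikube service` exit non-zero with SVC_UNREACHABLE.
	kubectl --context functional-963000 apply -f testdata/invalidsvc.yaml
	out/minikube-darwin-amd64 service invalid-svc -p functional-963000   # exit status 115, SVC_UNREACHABLE
	kubectl --context functional-963000 delete -f testdata/invalidsvc.yaml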

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-963000 config get cpus: exit status 14 (70.723231ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-963000 config get cpus: exit status 14 (54.950761ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.50s)
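
The config round trip above as a sketch; `config get` on an unset key exits with status 14, which is what the test asserts:
	out/minikube-darwin-amd64 -p functional-963000 config unset cpus
	out/minikube-darwin-amd64 -p functional-963000 config get cpus    # exit status 14: key not found
	out/minikube-darwin-amd64 -p functional-963000 config set cpus 2
	out/minikube-darwin-amd64 -p functional-963000 config get cpus    # prints 2
	out/minikube-darwin-amd64 -p functional-963000 config unset cpus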

                                                
                                    
TestFunctional/parallel/DashboardCmd (11.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-963000 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-darwin-amd64 dashboard --url --port 36195 -p functional-963000 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 3140: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (11.36s)

                                                
                                    
TestFunctional/parallel/DryRun (1.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-963000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:970: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-963000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (497.220731ms)

                                                
                                                
-- stdout --
	* [functional-963000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1111/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1111/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the hyperkit driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 03:42:15.992615    3100 out.go:291] Setting OutFile to fd 1 ...
	I0722 03:42:15.992794    3100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:42:15.992799    3100 out.go:304] Setting ErrFile to fd 2...
	I0722 03:42:15.992802    3100 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:42:15.992964    3100 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1111/.minikube/bin
	I0722 03:42:15.994438    3100 out.go:298] Setting JSON to false
	I0722 03:42:16.017126    3100 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2505,"bootTime":1721642431,"procs":489,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0722 03:42:16.017218    3100 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 03:42:16.038772    3100 out.go:177] * [functional-963000] minikube v1.33.1 on Darwin 14.5
	I0722 03:42:16.080785    3100 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 03:42:16.080856    3100 notify.go:220] Checking for updates...
	I0722 03:42:16.123611    3100 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1111/kubeconfig
	I0722 03:42:16.144752    3100 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0722 03:42:16.165425    3100 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 03:42:16.186610    3100 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1111/.minikube
	I0722 03:42:16.207779    3100 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 03:42:16.229376    3100 config.go:182] Loaded profile config "functional-963000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:42:16.230066    3100 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:42:16.230143    3100 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:42:16.239599    3100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50775
	I0722 03:42:16.239963    3100 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:42:16.240433    3100 main.go:141] libmachine: Using API Version  1
	I0722 03:42:16.240444    3100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:42:16.240660    3100 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:42:16.240787    3100 main.go:141] libmachine: (functional-963000) Calling .DriverName
	I0722 03:42:16.241015    3100 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 03:42:16.241277    3100 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:42:16.241310    3100 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:42:16.249625    3100 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50777
	I0722 03:42:16.249979    3100 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:42:16.250315    3100 main.go:141] libmachine: Using API Version  1
	I0722 03:42:16.250325    3100 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:42:16.250565    3100 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:42:16.250678    3100 main.go:141] libmachine: (functional-963000) Calling .DriverName
	I0722 03:42:16.279646    3100 out.go:177] * Using the hyperkit driver based on existing profile
	I0722 03:42:16.321449    3100 start.go:297] selected driver: hyperkit
	I0722 03:42:16.321473    3100 start.go:901] validating driver "hyperkit" against &{Name:functional-963000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.3 ClusterName:functional-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 03:42:16.321709    3100 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 03:42:16.348611    3100 out.go:177] 
	W0722 03:42:16.369510    3100 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0722 03:42:16.390602    3100 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-963000 --dry-run --alsologtostderr -v=1 --driver=hyperkit 
--- PASS: TestFunctional/parallel/DryRun (1.01s)
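
The two dry-run invocations above, side by side: the undersized request fails validation (exit status 23, RSRC_INSUFFICIENT_REQ_MEMORY) while the plain dry run validates the existing profile cleanly.
	out/minikube-darwin-amd64 start -p functional-963000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit   # rejected: below the 1800MB minimum
	out/minikube-darwin-amd64 start -p functional-963000 --dry-run --alsologtostderr -v=1 --driver=hyperkit             # passes validation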

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-darwin-amd64 start -p functional-963000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit 
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p functional-963000 --dry-run --memory 250MB --alsologtostderr --driver=hyperkit : exit status 23 (463.12421ms)

                                                
                                                
-- stdout --
	* [functional-963000] minikube v1.33.1 sur Darwin 14.5
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1111/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1111/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote hyperkit basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 03:42:16.996034    3116 out.go:291] Setting OutFile to fd 1 ...
	I0722 03:42:16.996190    3116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:42:16.996194    3116 out.go:304] Setting ErrFile to fd 2...
	I0722 03:42:16.996197    3116 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:42:16.996370    3116 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1111/.minikube/bin
	I0722 03:42:16.997879    3116 out.go:298] Setting JSON to false
	I0722 03:42:17.020698    3116 start.go:129] hostinfo: {"hostname":"MacOS-Agent-2.local","uptime":2506,"bootTime":1721642431,"procs":488,"os":"darwin","platform":"darwin","platformFamily":"Standalone Workstation","platformVersion":"14.5","kernelVersion":"23.5.0","kernelArch":"x86_64","virtualizationSystem":"","virtualizationRole":"","hostId":"2965c349-98a5-5970-aaa9-9eedd3ae5959"}
	W0722 03:42:17.020800    3116 start.go:137] gopshost.Virtualization returned error: not implemented yet
	I0722 03:42:17.041712    3116 out.go:177] * [functional-963000] minikube v1.33.1 sur Darwin 14.5
	I0722 03:42:17.063002    3116 out.go:177]   - MINIKUBE_LOCATION=19313
	I0722 03:42:17.063103    3116 notify.go:220] Checking for updates...
	I0722 03:42:17.106492    3116 out.go:177]   - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1111/kubeconfig
	I0722 03:42:17.127956    3116 out.go:177]   - MINIKUBE_BIN=out/minikube-darwin-amd64
	I0722 03:42:17.148815    3116 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0722 03:42:17.169713    3116 out.go:177]   - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1111/.minikube
	I0722 03:42:17.190721    3116 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0722 03:42:17.212745    3116 config.go:182] Loaded profile config "functional-963000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:42:17.213451    3116 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:42:17.213523    3116 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:42:17.223093    3116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50785
	I0722 03:42:17.223454    3116 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:42:17.223883    3116 main.go:141] libmachine: Using API Version  1
	I0722 03:42:17.223893    3116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:42:17.224105    3116 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:42:17.224219    3116 main.go:141] libmachine: (functional-963000) Calling .DriverName
	I0722 03:42:17.224429    3116 driver.go:392] Setting default libvirt URI to qemu:///system
	I0722 03:42:17.224707    3116 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:42:17.224733    3116 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:42:17.232996    3116 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50787
	I0722 03:42:17.233351    3116 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:42:17.233718    3116 main.go:141] libmachine: Using API Version  1
	I0722 03:42:17.233735    3116 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:42:17.233927    3116 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:42:17.234046    3116 main.go:141] libmachine: (functional-963000) Calling .DriverName
	I0722 03:42:17.262798    3116 out.go:177] * Utilisation du pilote hyperkit basé sur le profil existant
	I0722 03:42:17.304796    3116 start.go:297] selected driver: hyperkit
	I0722 03:42:17.304816    3116 start.go:901] validating driver "hyperkit" against &{Name:functional-963000 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/19298/minikube-v1.33.1-1721324531-19298-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.44-1721324606-19298@sha256:1c495b056df42bd3fd9a5c30d049e1802f9ed73a342611781f1ccc3c3853953f Memory:4000 CPUs:2 DiskSize:20000 Driver:hyperkit HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfi
g:{KubernetesVersion:v1.30.3 ClusterName:functional-963000 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:docker CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.169.0.4 Port:8441 KubernetesVersion:v1.30.3 ContainerRuntime:docker ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:2628
0h0m0s Mount:false MountString:/Users:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0722 03:42:17.304955    3116 start.go:912] status for hyperkit: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0722 03:42:17.328831    3116 out.go:177] 
	W0722 03:42:17.349817    3116 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0722 03:42:17.370724    3116 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.46s)

                                                
                                    
TestFunctional/parallel/StatusCmd (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 status
functional_test.go:856: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (0.50s)
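
The three status forms exercised above (default, Go-template, JSON), reproduced verbatim from this run:
	out/minikube-darwin-amd64 -p functional-963000 status
	out/minikube-darwin-amd64 -p functional-963000 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
	out/minikube-darwin-amd64 -p functional-963000 status -o json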

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (8.55s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-963000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
functional_test.go:1631: (dbg) Run:  kubectl --context functional-963000 expose deployment hello-node-connect --type=NodePort --port=8080
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-57b4589c47-l58hg" [0204a9f6-a409-4821-a529-9d984096b330] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-57b4589c47-l58hg" [0204a9f6-a409-4821-a529-9d984096b330] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 8.005287027s
functional_test.go:1645: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.169.0.4:30864
functional_test.go:1671: http://192.169.0.4:30864: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-57b4589c47-l58hg

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.169.0.4:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.169.0.4:30864
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (8.55s)
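
A trimmed sketch of the connectivity check above, using the echoserver image and NodePort settings from this run; the URL printed by `minikube service --url` is the endpoint the test then fetches:
	kubectl --context functional-963000 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
	kubectl --context functional-963000 expose deployment hello-node-connect --type=NodePort --port=8080
	# Once the pod is Running, resolve the NodePort URL for the service
	out/minikube-darwin-amd64 -p functional-963000 service hello-node-connect --url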

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 addons list
functional_test.go:1698: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.22s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (29.61s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [d0747ab3-aba2-4d98-9538-0649bd9ffc96] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 5.004974739s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-963000 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-963000 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-963000 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-963000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [de1693ae-fa66-4787-92da-e0b992327a89] Pending
helpers_test.go:344: "sp-pod" [de1693ae-fa66-4787-92da-e0b992327a89] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [de1693ae-fa66-4787-92da-e0b992327a89] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 14.004134324s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-963000 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-963000 delete -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-963000 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [40b434c7-6f9e-42cf-bc51-32e815965b4f] Pending
helpers_test.go:344: "sp-pod" [40b434c7-6f9e-42cf-bc51-32e815965b4f] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [40b434c7-6f9e-42cf-bc51-32e815965b4f] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 9.003072878s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-963000 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (29.61s)
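
The sequence above is a persistence round trip: write a file into the PVC-backed mount from one pod, delete the pod, and confirm the file survives into a replacement pod. A hand-run sketch using the same testdata manifests from the minikube repository (waiting for sp-pod to become Ready between steps):

  kubectl --context functional-963000 apply -f testdata/storage-provisioner/pvc.yaml
  kubectl --context functional-963000 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-963000 exec sp-pod -- touch /tmp/mount/foo
  kubectl --context functional-963000 delete -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-963000 apply -f testdata/storage-provisioner/pod.yaml
  kubectl --context functional-963000 exec sp-pod -- ls /tmp/mount   # "foo" should survive the pod replacement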

                                                
                                    
TestFunctional/parallel/SSHCmd (0.29s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.29s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh -n functional-963000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 cp functional-963000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelCpCmd1440922303/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh -n functional-963000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh -n functional-963000 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.08s)
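
Each cp above is immediately followed by an ssh cat of the destination, so the check is a copy-and-read-back round trip. A minimal hand-run version, using a fixed /tmp path in place of the per-run temp directory seen in the log:

  out/minikube-darwin-amd64 -p functional-963000 cp testdata/cp-test.txt /home/docker/cp-test.txt
  out/minikube-darwin-amd64 -p functional-963000 ssh -n functional-963000 "sudo cat /home/docker/cp-test.txt"
  out/minikube-darwin-amd64 -p functional-963000 cp functional-963000:/home/docker/cp-test.txt /tmp/cp-test.txt
  diff testdata/cp-test.txt /tmp/cp-test.txt && echo "round trip OK"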

                                                
                                    
TestFunctional/parallel/MySQL (24.3s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-963000 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-64454c8b5c-svk4f" [4178c5db-c29b-4ba2-b5b6-7689d4d2a311] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-64454c8b5c-svk4f" [4178c5db-c29b-4ba2-b5b6-7689d4d2a311] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 21.00283218s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-963000 exec mysql-64454c8b5c-svk4f -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-963000 exec mysql-64454c8b5c-svk4f -- mysql -ppassword -e "show databases;": exit status 1 (158.220387ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-963000 exec mysql-64454c8b5c-svk4f -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-963000 exec mysql-64454c8b5c-svk4f -- mysql -ppassword -e "show databases;": exit status 1 (120.896021ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-963000 exec mysql-64454c8b5c-svk4f -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (24.30s)
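
The two non-zero exits above are expected while the freshly started MySQL server is still initializing (ERROR 1045, then ERROR 2002); the test simply re-runs the query until it succeeds. A hypothetical shell equivalent of that retry (the pod name is specific to this run):

  until kubectl --context functional-963000 exec mysql-64454c8b5c-svk4f -- mysql -ppassword -e "show databases;"; do
    sleep 5   # give mysqld inside the pod time to finish initializing
  done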

                                                
                                    
TestFunctional/parallel/FileSync (0.22s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/1637/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh "sudo cat /etc/test/nested/copy/1637/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.22s)
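
The path checked above comes from minikube's file-sync behaviour: files placed under the files/ tree of the minikube home (by default ~/.minikube/files; this CI run points MINIKUBE_HOME elsewhere) are copied into the VM at the same relative path on the next start. Here that file is /etc/test/nested/copy/1637/hosts, where 1637 is the test runner's PID. A rough sketch with a hypothetical file, assuming the default layout:

  mkdir -p ~/.minikube/files/etc/example
  echo "synced content" > ~/.minikube/files/etc/example/hello
  out/minikube-darwin-amd64 -p functional-963000 start      # files/ tree is mirrored into the VM on start
  out/minikube-darwin-amd64 -p functional-963000 ssh "sudo cat /etc/example/hello"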

                                                
                                    
TestFunctional/parallel/CertSync (1.08s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/1637.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh "sudo cat /etc/ssl/certs/1637.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/1637.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh "sudo cat /usr/share/ca-certificates/1637.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh "sudo cat /etc/ssl/certs/51391683.0"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/16372.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh "sudo cat /etc/ssl/certs/16372.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/16372.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh "sudo cat /usr/share/ca-certificates/16372.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.08s)
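
The six checks above verify that the extra certificates (1637.pem and 16372.pem, named after the test runner's PID) were copied into the VM both as plain files and under OpenSSL subject-hash names (51391683.0 and 3ec20f2e.0), which is how the system trust store references them. A sketch of verifying one of them by hand, assuming openssl is available in the guest image:

  out/minikube-darwin-amd64 -p functional-963000 ssh "sudo cat /etc/ssl/certs/1637.pem"
  out/minikube-darwin-amd64 -p functional-963000 ssh "sudo openssl x509 -noout -hash -in /usr/share/ca-certificates/1637.pem"
  # the printed subject hash should match the .0 file name checked above (51391683 in this run)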

                                                
                                    
TestFunctional/parallel/NodeLabels (0.05s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-963000 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.05s)
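
The go-template above just flattens the first node's label keys; an equivalent, less template-heavy spot check is:

  kubectl --context functional-963000 get nodes --show-labels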

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh "sudo systemctl is-active crio"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-963000 ssh "sudo systemctl is-active crio": exit status 1 (163.257327ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.16s)
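
The non-zero exit here is the expected result: with Docker as the active runtime, crio must be disabled, so `systemctl is-active crio` prints "inactive" and exits 3 (visible as "Process exited with status 3" in stderr above), which minikube ssh surfaces as a non-zero exit. A sketch of the same check, plus the positive case for the runtime actually in use:

  out/minikube-darwin-amd64 -p functional-963000 ssh "sudo systemctl is-active crio"     # prints "inactive", non-zero exit
  out/minikube-darwin-amd64 -p functional-963000 ssh "sudo systemctl is-active docker"   # prints "active", exit 0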

                                                
                                    
TestFunctional/parallel/License (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-darwin-amd64 license
--- PASS: TestFunctional/parallel/License (0.49s)

                                                
                                    
TestFunctional/parallel/Version/short (0.1s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 version --short
--- PASS: TestFunctional/parallel/Version/short (0.10s)

                                                
                                    
TestFunctional/parallel/Version/components (0.49s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.49s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-963000 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.30.3
registry.k8s.io/kube-proxy:v1.30.3
registry.k8s.io/kube-controller-manager:v1.30.3
registry.k8s.io/kube-apiserver:v1.30.3
registry.k8s.io/etcd:3.5.12-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.11.1
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
docker.io/library/nginx:latest
docker.io/library/nginx:alpine
docker.io/library/mysql:5.7
docker.io/library/minikube-local-cache-test:functional-963000
docker.io/kicbase/echo-server:functional-963000
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-963000 image ls --format short --alsologtostderr:
I0722 03:42:19.319076    3149 out.go:291] Setting OutFile to fd 1 ...
I0722 03:42:19.319325    3149 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 03:42:19.319331    3149 out.go:304] Setting ErrFile to fd 2...
I0722 03:42:19.319335    3149 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 03:42:19.319513    3149 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1111/.minikube/bin
I0722 03:42:19.320098    3149 config.go:182] Loaded profile config "functional-963000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 03:42:19.320192    3149 config.go:182] Loaded profile config "functional-963000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 03:42:19.320534    3149 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0722 03:42:19.320578    3149 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0722 03:42:19.328971    3149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50840
I0722 03:42:19.329397    3149 main.go:141] libmachine: () Calling .GetVersion
I0722 03:42:19.329823    3149 main.go:141] libmachine: Using API Version  1
I0722 03:42:19.329858    3149 main.go:141] libmachine: () Calling .SetConfigRaw
I0722 03:42:19.330098    3149 main.go:141] libmachine: () Calling .GetMachineName
I0722 03:42:19.330235    3149 main.go:141] libmachine: (functional-963000) Calling .GetState
I0722 03:42:19.330328    3149 main.go:141] libmachine: (functional-963000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0722 03:42:19.330415    3149 main.go:141] libmachine: (functional-963000) DBG | hyperkit pid from json: 2422
I0722 03:42:19.331720    3149 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0722 03:42:19.331747    3149 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0722 03:42:19.340341    3149 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50842
I0722 03:42:19.340687    3149 main.go:141] libmachine: () Calling .GetVersion
I0722 03:42:19.341012    3149 main.go:141] libmachine: Using API Version  1
I0722 03:42:19.341022    3149 main.go:141] libmachine: () Calling .SetConfigRaw
I0722 03:42:19.341206    3149 main.go:141] libmachine: () Calling .GetMachineName
I0722 03:42:19.341328    3149 main.go:141] libmachine: (functional-963000) Calling .DriverName
I0722 03:42:19.341472    3149 ssh_runner.go:195] Run: systemctl --version
I0722 03:42:19.341499    3149 main.go:141] libmachine: (functional-963000) Calling .GetSSHHostname
I0722 03:42:19.341585    3149 main.go:141] libmachine: (functional-963000) Calling .GetSSHPort
I0722 03:42:19.341661    3149 main.go:141] libmachine: (functional-963000) Calling .GetSSHKeyPath
I0722 03:42:19.341735    3149 main.go:141] libmachine: (functional-963000) Calling .GetSSHUsername
I0722 03:42:19.341819    3149 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/functional-963000/id_rsa Username:docker}
I0722 03:42:19.376169    3149 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0722 03:42:19.395362    3149 main.go:141] libmachine: Making call to close driver server
I0722 03:42:19.395376    3149 main.go:141] libmachine: (functional-963000) Calling .Close
I0722 03:42:19.395522    3149 main.go:141] libmachine: Successfully made call to close driver server
I0722 03:42:19.395533    3149 main.go:141] libmachine: Making call to close connection to plugin binary
I0722 03:42:19.395540    3149 main.go:141] libmachine: Making call to close driver server
I0722 03:42:19.395542    3149 main.go:141] libmachine: (functional-963000) DBG | Closing plugin on server side
I0722 03:42:19.395546    3149 main.go:141] libmachine: (functional-963000) Calling .Close
I0722 03:42:19.395720    3149 main.go:141] libmachine: (functional-963000) DBG | Closing plugin on server side
I0722 03:42:19.395752    3149 main.go:141] libmachine: Successfully made call to close driver server
I0722 03:42:19.395765    3149 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-963000 image ls --format table --alsologtostderr:
|---------------------------------------------|-------------------|---------------|--------|
|                    Image                    |        Tag        |   Image ID    |  Size  |
|---------------------------------------------|-------------------|---------------|--------|
| docker.io/library/minikube-local-cache-test | functional-963000 | 67462a0fa6225 | 30B    |
| registry.k8s.io/kube-scheduler              | v1.30.3           | 3edc18e7b7672 | 62MB   |
| gcr.io/k8s-minikube/busybox                 | 1.28.4-glibc      | 56cc512116c8f | 4.4MB  |
| registry.k8s.io/pause                       | 3.1               | da86e6ba6ca19 | 742kB  |
| docker.io/localhost/my-image                | functional-963000 | b54a19808ab7f | 1.24MB |
| registry.k8s.io/etcd                        | 3.5.12-0          | 3861cfcd7c04c | 149MB  |
| docker.io/kicbase/echo-server               | functional-963000 | 9056ab77afb8e | 4.94MB |
| registry.k8s.io/echoserver                  | 1.8               | 82e4c8a736a4f | 95.4MB |
| docker.io/library/nginx                     | latest            | fffffc90d343c | 188MB  |
| docker.io/library/nginx                     | alpine            | 099a2d701db1f | 43.2MB |
| docker.io/library/mysql                     | 5.7               | 5107333e08a87 | 501MB  |
| registry.k8s.io/coredns/coredns             | v1.11.1           | cbb01a7bd410d | 59.8MB |
| registry.k8s.io/pause                       | 3.9               | e6f1816883972 | 744kB  |
| gcr.io/k8s-minikube/storage-provisioner     | v5                | 6e38f40d628db | 31.5MB |
| registry.k8s.io/kube-apiserver              | v1.30.3           | 1f6d574d502f3 | 117MB  |
| registry.k8s.io/kube-controller-manager     | v1.30.3           | 76932a3b37d7e | 111MB  |
| registry.k8s.io/kube-proxy                  | v1.30.3           | 55bb025d2cfa5 | 84.7MB |
| gcr.io/k8s-minikube/busybox                 | latest            | beae173ccac6a | 1.24MB |
| registry.k8s.io/pause                       | 3.3               | 0184c1613d929 | 683kB  |
| registry.k8s.io/pause                       | latest            | 350b164e7ae1d | 240kB  |
|---------------------------------------------|-------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-963000 image ls --format table --alsologtostderr:
I0722 03:42:21.724606    3174 out.go:291] Setting OutFile to fd 1 ...
I0722 03:42:21.724900    3174 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 03:42:21.724906    3174 out.go:304] Setting ErrFile to fd 2...
I0722 03:42:21.724909    3174 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 03:42:21.725111    3174 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1111/.minikube/bin
I0722 03:42:21.725736    3174 config.go:182] Loaded profile config "functional-963000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 03:42:21.725837    3174 config.go:182] Loaded profile config "functional-963000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 03:42:21.726187    3174 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0722 03:42:21.726243    3174 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0722 03:42:21.735314    3174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50878
I0722 03:42:21.735828    3174 main.go:141] libmachine: () Calling .GetVersion
I0722 03:42:21.736308    3174 main.go:141] libmachine: Using API Version  1
I0722 03:42:21.736344    3174 main.go:141] libmachine: () Calling .SetConfigRaw
I0722 03:42:21.736623    3174 main.go:141] libmachine: () Calling .GetMachineName
I0722 03:42:21.736744    3174 main.go:141] libmachine: (functional-963000) Calling .GetState
I0722 03:42:21.736875    3174 main.go:141] libmachine: (functional-963000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0722 03:42:21.737000    3174 main.go:141] libmachine: (functional-963000) DBG | hyperkit pid from json: 2422
I0722 03:42:21.738412    3174 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0722 03:42:21.738438    3174 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0722 03:42:21.747780    3174 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50880
I0722 03:42:21.748160    3174 main.go:141] libmachine: () Calling .GetVersion
I0722 03:42:21.748511    3174 main.go:141] libmachine: Using API Version  1
I0722 03:42:21.748522    3174 main.go:141] libmachine: () Calling .SetConfigRaw
I0722 03:42:21.748796    3174 main.go:141] libmachine: () Calling .GetMachineName
I0722 03:42:21.748921    3174 main.go:141] libmachine: (functional-963000) Calling .DriverName
I0722 03:42:21.749104    3174 ssh_runner.go:195] Run: systemctl --version
I0722 03:42:21.749125    3174 main.go:141] libmachine: (functional-963000) Calling .GetSSHHostname
I0722 03:42:21.749214    3174 main.go:141] libmachine: (functional-963000) Calling .GetSSHPort
I0722 03:42:21.749306    3174 main.go:141] libmachine: (functional-963000) Calling .GetSSHKeyPath
I0722 03:42:21.749419    3174 main.go:141] libmachine: (functional-963000) Calling .GetSSHUsername
I0722 03:42:21.749515    3174 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/functional-963000/id_rsa Username:docker}
I0722 03:42:21.785008    3174 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0722 03:42:21.802088    3174 main.go:141] libmachine: Making call to close driver server
I0722 03:42:21.802097    3174 main.go:141] libmachine: (functional-963000) Calling .Close
I0722 03:42:21.802254    3174 main.go:141] libmachine: Successfully made call to close driver server
I0722 03:42:21.802262    3174 main.go:141] libmachine: Making call to close connection to plugin binary
I0722 03:42:21.802283    3174 main.go:141] libmachine: (functional-963000) DBG | Closing plugin on server side
I0722 03:42:21.802291    3174 main.go:141] libmachine: Making call to close driver server
I0722 03:42:21.802297    3174 main.go:141] libmachine: (functional-963000) Calling .Close
I0722 03:42:21.802429    3174 main.go:141] libmachine: (functional-963000) DBG | Closing plugin on server side
I0722 03:42:21.802430    3174 main.go:141] libmachine: Successfully made call to close driver server
I0722 03:42:21.802448    3174 main.go:141] libmachine: Making call to close connection to plugin binary
2024/07/22 03:42:28 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.16s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.18s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-963000 image ls --format json --alsologtostderr:
[{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.9"],"size":"744000"},{"id":"9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30","repoDigests":[],"repoTags":["docker.io/kicbase/echo-server:functional-963000"],"size":"4940000"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31500000"},{"id":"76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e","repoDigests":[],"repoTags":["registry.k8s.io/kube-controller-manager:v1.30.3"],"size":"111000000"},{"id":"099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233","repoDigests":[],"repoTags":["docker.io/library/nginx:alpine"],"size":"43200000"},{"id":"3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899","repoDigests":[],"repoTags":["registry.k8s.io/etcd:3.5.12-0"],"size":"149000000"},{"id":"cbb01a7bd410dc08ba382018ab909a674fb0e48
687f0c00797ed5bc34fcc6bb4","repoDigests":[],"repoTags":["registry.k8s.io/coredns/coredns:v1.11.1"],"size":"59800000"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":[],"repoTags":["registry.k8s.io/pause:latest"],"size":"240000"},{"id":"1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d","repoDigests":[],"repoTags":["registry.k8s.io/kube-apiserver:v1.30.3"],"size":"117000000"},{"id":"55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1","repoDigests":[],"repoTags":["registry.k8s.io/kube-proxy:v1.30.3"],"size":"84700000"},{"id":"fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c","repoDigests":[],"repoTags":["docker.io/library/nginx:latest"],"size":"188000000"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.3"],"size":"683000"},{"id":"67462a0fa6225708d6c58547cc24b26f0c5c9b265883eeeaef9cf5937128bbfa","repoDigests":[],"repoTags":["docker.io/library/minikub
e-local-cache-test:functional-963000"],"size":"30"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":[],"repoTags":["docker.io/library/mysql:5.7"],"size":"501000000"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":[],"repoTags":["registry.k8s.io/pause:3.1"],"size":"742000"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":[],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"95400000"},{"id":"b54a19808ab7febae6811d95f17c5f94ffe040ad29116920fadd5d6aabc9d2a7","repoDigests":[],"repoTags":["docker.io/localhost/my-image:functional-963000"],"size":"1240000"},{"id":"3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2","repoDigests":[],"repoTags":["registry.k8s.io/kube-scheduler:v1.30.3"],"size":"62000000"},{"id":"beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:latest"],"size":"1240000"},{"id":"56cc512116c8f894f1
1ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":[],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4400000"}]
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-963000 image ls --format json --alsologtostderr:
I0722 03:42:21.545451    3170 out.go:291] Setting OutFile to fd 1 ...
I0722 03:42:21.545648    3170 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 03:42:21.545653    3170 out.go:304] Setting ErrFile to fd 2...
I0722 03:42:21.545657    3170 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 03:42:21.545847    3170 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1111/.minikube/bin
I0722 03:42:21.546541    3170 config.go:182] Loaded profile config "functional-963000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 03:42:21.546639    3170 config.go:182] Loaded profile config "functional-963000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 03:42:21.546989    3170 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0722 03:42:21.547035    3170 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0722 03:42:21.555296    3170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50872
I0722 03:42:21.555760    3170 main.go:141] libmachine: () Calling .GetVersion
I0722 03:42:21.556182    3170 main.go:141] libmachine: Using API Version  1
I0722 03:42:21.556192    3170 main.go:141] libmachine: () Calling .SetConfigRaw
I0722 03:42:21.556405    3170 main.go:141] libmachine: () Calling .GetMachineName
I0722 03:42:21.556508    3170 main.go:141] libmachine: (functional-963000) Calling .GetState
I0722 03:42:21.556586    3170 main.go:141] libmachine: (functional-963000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0722 03:42:21.556655    3170 main.go:141] libmachine: (functional-963000) DBG | hyperkit pid from json: 2422
I0722 03:42:21.557925    3170 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0722 03:42:21.557945    3170 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0722 03:42:21.566282    3170 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50874
I0722 03:42:21.566641    3170 main.go:141] libmachine: () Calling .GetVersion
I0722 03:42:21.566970    3170 main.go:141] libmachine: Using API Version  1
I0722 03:42:21.566982    3170 main.go:141] libmachine: () Calling .SetConfigRaw
I0722 03:42:21.567202    3170 main.go:141] libmachine: () Calling .GetMachineName
I0722 03:42:21.567317    3170 main.go:141] libmachine: (functional-963000) Calling .DriverName
I0722 03:42:21.567468    3170 ssh_runner.go:195] Run: systemctl --version
I0722 03:42:21.567492    3170 main.go:141] libmachine: (functional-963000) Calling .GetSSHHostname
I0722 03:42:21.567580    3170 main.go:141] libmachine: (functional-963000) Calling .GetSSHPort
I0722 03:42:21.567657    3170 main.go:141] libmachine: (functional-963000) Calling .GetSSHKeyPath
I0722 03:42:21.567737    3170 main.go:141] libmachine: (functional-963000) Calling .GetSSHUsername
I0722 03:42:21.567818    3170 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/functional-963000/id_rsa Username:docker}
I0722 03:42:21.604705    3170 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0722 03:42:21.642862    3170 main.go:141] libmachine: Making call to close driver server
I0722 03:42:21.642874    3170 main.go:141] libmachine: (functional-963000) Calling .Close
I0722 03:42:21.643020    3170 main.go:141] libmachine: (functional-963000) DBG | Closing plugin on server side
I0722 03:42:21.643030    3170 main.go:141] libmachine: Successfully made call to close driver server
I0722 03:42:21.643039    3170 main.go:141] libmachine: Making call to close connection to plugin binary
I0722 03:42:21.643047    3170 main.go:141] libmachine: Making call to close driver server
I0722 03:42:21.643052    3170 main.go:141] libmachine: (functional-963000) Calling .Close
I0722 03:42:21.643221    3170 main.go:141] libmachine: (functional-963000) DBG | Closing plugin on server side
I0722 03:42:21.643285    3170 main.go:141] libmachine: Successfully made call to close driver server
I0722 03:42:21.643323    3170 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.18s)
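
The JSON form is the easiest to post-process. For example, a one-liner (assuming jq is installed on the host) that extracts just the repo tags shown by the short listing:

  out/minikube-darwin-amd64 -p functional-963000 image ls --format json | jq -r '.[].repoTags[]'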

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-963000 image ls --format yaml --alsologtostderr:
- id: 55bb025d2cfa592b9381d01e122e72a1ed4b29ca32f86b7d289d99da794784d1
repoDigests: []
repoTags:
- registry.k8s.io/kube-proxy:v1.30.3
size: "84700000"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.3
size: "683000"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests: []
repoTags:
- registry.k8s.io/echoserver:1.8
size: "95400000"
- id: 67462a0fa6225708d6c58547cc24b26f0c5c9b265883eeeaef9cf5937128bbfa
repoDigests: []
repoTags:
- docker.io/library/minikube-local-cache-test:functional-963000
size: "30"
- id: 1f6d574d502f3b61c851b1bbd4ef2a964ce4c70071dd8da556f2d490d36b095d
repoDigests: []
repoTags:
- registry.k8s.io/kube-apiserver:v1.30.3
size: "117000000"
- id: 76932a3b37d7eb138c8f47c9a2b4218f0466dd273badf856f2ce2f0277e15b5e
repoDigests: []
repoTags:
- registry.k8s.io/kube-controller-manager:v1.30.3
size: "111000000"
- id: 099a2d701db1f36dcc012419be04b7da299f48b4d2054fa8ab51e7764891e233
repoDigests: []
repoTags:
- docker.io/library/nginx:alpine
size: "43200000"
- id: 3861cfcd7c04ccac1f062788eca39487248527ef0c0cfd477a83d7691a75a899
repoDigests: []
repoTags:
- registry.k8s.io/etcd:3.5.12-0
size: "149000000"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests: []
repoTags:
- docker.io/library/mysql:5.7
size: "501000000"
- id: cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4
repoDigests: []
repoTags:
- registry.k8s.io/coredns/coredns:v1.11.1
size: "59800000"
- id: 9056ab77afb8e18e04303f11000a9d31b3f16b74c59475b899ae1b342d328d30
repoDigests: []
repoTags:
- docker.io/kicbase/echo-server:functional-963000
size: "4940000"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4400000"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests: []
repoTags:
- registry.k8s.io/pause:latest
size: "240000"
- id: 3edc18e7b76722eb2eb37a0858c09caacbd422d6e0cae4c2e5ce67bc9a9795e2
repoDigests: []
repoTags:
- registry.k8s.io/kube-scheduler:v1.30.3
size: "62000000"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.9
size: "744000"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests: []
repoTags:
- registry.k8s.io/pause:3.1
size: "742000"
- id: fffffc90d343cbcb01a5032edac86db5998c536cd0a366514121a45c6723765c
repoDigests: []
repoTags:
- docker.io/library/nginx:latest
size: "188000000"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests: []
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31500000"

                                                
                                                
functional_test.go:268: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-963000 image ls --format yaml --alsologtostderr:
I0722 03:42:19.476114    3153 out.go:291] Setting OutFile to fd 1 ...
I0722 03:42:19.476305    3153 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 03:42:19.476310    3153 out.go:304] Setting ErrFile to fd 2...
I0722 03:42:19.476313    3153 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 03:42:19.476491    3153 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1111/.minikube/bin
I0722 03:42:19.477076    3153 config.go:182] Loaded profile config "functional-963000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 03:42:19.477170    3153 config.go:182] Loaded profile config "functional-963000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 03:42:19.477514    3153 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0722 03:42:19.477558    3153 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0722 03:42:19.485760    3153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50846
I0722 03:42:19.486152    3153 main.go:141] libmachine: () Calling .GetVersion
I0722 03:42:19.486558    3153 main.go:141] libmachine: Using API Version  1
I0722 03:42:19.486584    3153 main.go:141] libmachine: () Calling .SetConfigRaw
I0722 03:42:19.486801    3153 main.go:141] libmachine: () Calling .GetMachineName
I0722 03:42:19.486939    3153 main.go:141] libmachine: (functional-963000) Calling .GetState
I0722 03:42:19.487029    3153 main.go:141] libmachine: (functional-963000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0722 03:42:19.487103    3153 main.go:141] libmachine: (functional-963000) DBG | hyperkit pid from json: 2422
I0722 03:42:19.488379    3153 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0722 03:42:19.488401    3153 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0722 03:42:19.496637    3153 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50848
I0722 03:42:19.496992    3153 main.go:141] libmachine: () Calling .GetVersion
I0722 03:42:19.497331    3153 main.go:141] libmachine: Using API Version  1
I0722 03:42:19.497343    3153 main.go:141] libmachine: () Calling .SetConfigRaw
I0722 03:42:19.497571    3153 main.go:141] libmachine: () Calling .GetMachineName
I0722 03:42:19.497707    3153 main.go:141] libmachine: (functional-963000) Calling .DriverName
I0722 03:42:19.497865    3153 ssh_runner.go:195] Run: systemctl --version
I0722 03:42:19.497890    3153 main.go:141] libmachine: (functional-963000) Calling .GetSSHHostname
I0722 03:42:19.497976    3153 main.go:141] libmachine: (functional-963000) Calling .GetSSHPort
I0722 03:42:19.498057    3153 main.go:141] libmachine: (functional-963000) Calling .GetSSHKeyPath
I0722 03:42:19.498160    3153 main.go:141] libmachine: (functional-963000) Calling .GetSSHUsername
I0722 03:42:19.498246    3153 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/functional-963000/id_rsa Username:docker}
I0722 03:42:19.532151    3153 ssh_runner.go:195] Run: docker images --no-trunc --format "{{json .}}"
I0722 03:42:19.551601    3153 main.go:141] libmachine: Making call to close driver server
I0722 03:42:19.551616    3153 main.go:141] libmachine: (functional-963000) Calling .Close
I0722 03:42:19.551759    3153 main.go:141] libmachine: (functional-963000) DBG | Closing plugin on server side
I0722 03:42:19.551760    3153 main.go:141] libmachine: Successfully made call to close driver server
I0722 03:42:19.551770    3153 main.go:141] libmachine: Making call to close connection to plugin binary
I0722 03:42:19.551779    3153 main.go:141] libmachine: Making call to close driver server
I0722 03:42:19.551783    3153 main.go:141] libmachine: (functional-963000) Calling .Close
I0722 03:42:19.551931    3153 main.go:141] libmachine: (functional-963000) DBG | Closing plugin on server side
I0722 03:42:19.551958    3153 main.go:141] libmachine: Successfully made call to close driver server
I0722 03:42:19.551968    3153 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.20s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageBuild (1.87s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-963000 ssh pgrep buildkitd: exit status 1 (126.013185ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 image build -t localhost/my-image:functional-963000 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-darwin-amd64 -p functional-963000 image build -t localhost/my-image:functional-963000 testdata/build --alsologtostderr: (1.582191211s)
functional_test.go:319: (dbg) Stdout: out/minikube-darwin-amd64 -p functional-963000 image build -t localhost/my-image:functional-963000 testdata/build --alsologtostderr:
Sending build context to Docker daemon  3.072kB

Step 1/3 : FROM gcr.io/k8s-minikube/busybox
latest: Pulling from k8s-minikube/busybox
5cc84ad355aa: Pulling fs layer
5cc84ad355aa: Verifying Checksum
5cc84ad355aa: Download complete
5cc84ad355aa: Pull complete
Digest: sha256:ca5ae90100d50772da31f3b5016209e25ad61972404e2ccd83d44f10dee7e79b
Status: Downloaded newer image for gcr.io/k8s-minikube/busybox:latest
---> beae173ccac6
Step 2/3 : RUN true
---> Running in 2e677f6756ef
---> Removed intermediate container 2e677f6756ef
---> 5a77d24517dc
Step 3/3 : ADD content.txt /
---> b54a19808ab7
Successfully built b54a19808ab7
Successfully tagged localhost/my-image:functional-963000
functional_test.go:322: (dbg) Stderr: out/minikube-darwin-amd64 -p functional-963000 image build -t localhost/my-image:functional-963000 testdata/build --alsologtostderr:
I0722 03:42:19.804003    3162 out.go:291] Setting OutFile to fd 1 ...
I0722 03:42:19.804282    3162 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 03:42:19.804288    3162 out.go:304] Setting ErrFile to fd 2...
I0722 03:42:19.804292    3162 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0722 03:42:19.804451    3162 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1111/.minikube/bin
I0722 03:42:19.805016    3162 config.go:182] Loaded profile config "functional-963000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 03:42:19.805712    3162 config.go:182] Loaded profile config "functional-963000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
I0722 03:42:19.806069    3162 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0722 03:42:19.806110    3162 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0722 03:42:19.814312    3162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50859
I0722 03:42:19.814737    3162 main.go:141] libmachine: () Calling .GetVersion
I0722 03:42:19.815166    3162 main.go:141] libmachine: Using API Version  1
I0722 03:42:19.815203    3162 main.go:141] libmachine: () Calling .SetConfigRaw
I0722 03:42:19.815432    3162 main.go:141] libmachine: () Calling .GetMachineName
I0722 03:42:19.815552    3162 main.go:141] libmachine: (functional-963000) Calling .GetState
I0722 03:42:19.815635    3162 main.go:141] libmachine: (functional-963000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
I0722 03:42:19.815709    3162 main.go:141] libmachine: (functional-963000) DBG | hyperkit pid from json: 2422
I0722 03:42:19.816984    3162 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
I0722 03:42:19.817008    3162 main.go:141] libmachine: Launching plugin server for driver hyperkit
I0722 03:42:19.825435    3162 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:50861
I0722 03:42:19.825772    3162 main.go:141] libmachine: () Calling .GetVersion
I0722 03:42:19.826124    3162 main.go:141] libmachine: Using API Version  1
I0722 03:42:19.826138    3162 main.go:141] libmachine: () Calling .SetConfigRaw
I0722 03:42:19.826354    3162 main.go:141] libmachine: () Calling .GetMachineName
I0722 03:42:19.826458    3162 main.go:141] libmachine: (functional-963000) Calling .DriverName
I0722 03:42:19.826604    3162 ssh_runner.go:195] Run: systemctl --version
I0722 03:42:19.826624    3162 main.go:141] libmachine: (functional-963000) Calling .GetSSHHostname
I0722 03:42:19.826690    3162 main.go:141] libmachine: (functional-963000) Calling .GetSSHPort
I0722 03:42:19.826771    3162 main.go:141] libmachine: (functional-963000) Calling .GetSSHKeyPath
I0722 03:42:19.826843    3162 main.go:141] libmachine: (functional-963000) Calling .GetSSHUsername
I0722 03:42:19.826944    3162 sshutil.go:53] new ssh client: &{IP:192.169.0.4 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/functional-963000/id_rsa Username:docker}
I0722 03:42:19.860237    3162 build_images.go:161] Building image from path: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.1811561784.tar
I0722 03:42:19.860310    3162 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0722 03:42:19.868112    3162 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.1811561784.tar
I0722 03:42:19.871375    3162 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.1811561784.tar: stat -c "%s %y" /var/lib/minikube/build/build.1811561784.tar: Process exited with status 1
stdout:

                                                
                                                
stderr:
stat: cannot statx '/var/lib/minikube/build/build.1811561784.tar': No such file or directory
I0722 03:42:19.871397    3162 ssh_runner.go:362] scp /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.1811561784.tar --> /var/lib/minikube/build/build.1811561784.tar (3072 bytes)
I0722 03:42:19.892467    3162 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.1811561784
I0722 03:42:19.901185    3162 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.1811561784 -xf /var/lib/minikube/build/build.1811561784.tar
I0722 03:42:19.908494    3162 docker.go:360] Building image: /var/lib/minikube/build/build.1811561784
I0722 03:42:19.908554    3162 ssh_runner.go:195] Run: docker build -t localhost/my-image:functional-963000 /var/lib/minikube/build/build.1811561784
DEPRECATED: The legacy builder is deprecated and will be removed in a future release.
Install the buildx component to build images with BuildKit:
https://docs.docker.com/go/buildx/

                                                
                                                
I0722 03:42:21.287988    3162 ssh_runner.go:235] Completed: docker build -t localhost/my-image:functional-963000 /var/lib/minikube/build/build.1811561784: (1.379456964s)
I0722 03:42:21.288052    3162 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.1811561784
I0722 03:42:21.296422    3162 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.1811561784.tar
I0722 03:42:21.304182    3162 build_images.go:217] Built localhost/my-image:functional-963000 from /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/build.1811561784.tar
I0722 03:42:21.304207    3162 build_images.go:133] succeeded building to: functional-963000
I0722 03:42:21.304212    3162 build_images.go:134] failed building to: 
I0722 03:42:21.304224    3162 main.go:141] libmachine: Making call to close driver server
I0722 03:42:21.304231    3162 main.go:141] libmachine: (functional-963000) Calling .Close
I0722 03:42:21.304392    3162 main.go:141] libmachine: Successfully made call to close driver server
I0722 03:42:21.304403    3162 main.go:141] libmachine: Making call to close connection to plugin binary
I0722 03:42:21.304410    3162 main.go:141] libmachine: Making call to close driver server
I0722 03:42:21.304415    3162 main.go:141] libmachine: (functional-963000) DBG | Closing plugin on server side
I0722 03:42:21.304417    3162 main.go:141] libmachine: (functional-963000) Calling .Close
I0722 03:42:21.304600    3162 main.go:141] libmachine: (functional-963000) DBG | Closing plugin on server side
I0722 03:42:21.304604    3162 main.go:141] libmachine: Successfully made call to close driver server
I0722 03:42:21.304613    3162 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (1.87s)
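
The three build steps logged above imply a build context containing a content.txt plus a Dockerfile equivalent to the one below. Recreating it by hand (the directory and file contents here are a sketch, not necessarily the repository's exact testdata) and passing it to minikube image build reproduces the run:

  mkdir -p /tmp/minikube-build-sketch
  echo hello > /tmp/minikube-build-sketch/content.txt
  printf 'FROM gcr.io/k8s-minikube/busybox\nRUN true\nADD content.txt /\n' > /tmp/minikube-build-sketch/Dockerfile
  out/minikube-darwin-amd64 -p functional-963000 image build -t localhost/my-image:functional-963000 /tmp/minikube-build-sketch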

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.73s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull docker.io/kicbase/echo-server:1.0
functional_test.go:341: (dbg) Done: docker pull docker.io/kicbase/echo-server:1.0: (1.700456316s)
functional_test.go:346: (dbg) Run:  docker tag docker.io/kicbase/echo-server:1.0 docker.io/kicbase/echo-server:functional-963000
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.73s)

                                                
                                    
TestFunctional/parallel/DockerEnv/bash (0.63s)
=== RUN   TestFunctional/parallel/DockerEnv/bash
functional_test.go:495: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-963000 docker-env) && out/minikube-darwin-amd64 status -p functional-963000"
functional_test.go:518: (dbg) Run:  /bin/bash -c "eval $(out/minikube-darwin-amd64 -p functional-963000 docker-env) && docker images"
--- PASS: TestFunctional/parallel/DockerEnv/bash (0.63s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.26s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.21s)

                                                
                                    
TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)
=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.22s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 image load --daemon docker.io/kicbase/echo-server:functional-963000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (1.00s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.66s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 image load --daemon docker.io/kicbase/echo-server:functional-963000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (0.66s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.44s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull docker.io/kicbase/echo-server:latest
functional_test.go:239: (dbg) Run:  docker tag docker.io/kicbase/echo-server:latest docker.io/kicbase/echo-server:functional-963000
functional_test.go:244: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 image load --daemon docker.io/kicbase/echo-server:functional-963000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (1.44s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 image save docker.io/kicbase/echo-server:functional-963000 /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (0.34s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 image rm docker.io/kicbase/echo-server:functional-963000 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.42s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 image load /Users/jenkins/workspace/echo-server-save.tar --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (0.79s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)
=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi docker.io/kicbase/echo-server:functional-963000
functional_test.go:423: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 image save --daemon docker.io/kicbase/echo-server:functional-963000 --alsologtostderr
functional_test.go:428: (dbg) Run:  docker image inspect docker.io/kicbase/echo-server:functional-963000
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (0.41s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (20.12s)
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-963000 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-963000 expose deployment hello-node --type=NodePort --port=8080
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-6d85cfcfd8-r72qv" [b5f97917-e548-4875-aa54-2e385d152d3d] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-6d85cfcfd8-r72qv" [b5f97917-e548-4875-aa54-2e385d152d3d] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 20.005000863s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (20.12s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.38s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-963000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:154: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-963000 tunnel --alsologtostderr]
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-963000 tunnel --alsologtostderr] ...
helpers_test.go:508: unable to kill pid 2849: os: process already finished
functional_test_tunnel_test.go:194: (dbg) stopping [out/minikube-darwin-amd64 -p functional-963000 tunnel --alsologtostderr] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.38s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.18s)
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.18s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:129: (dbg) daemon: [out/minikube-darwin-amd64 -p functional-963000 tunnel --alsologtostderr]
--- PASS: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.14s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup
functional_test_tunnel_test.go:212: (dbg) Run:  kubectl --context functional-963000 apply -f testdata/testsvc.yaml
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: waiting 4m0s for pods matching "run=nginx-svc" in namespace "default" ...
helpers_test.go:344: "nginx-svc" [1a1ea5d6-0c56-448f-9ba5-c4a1dab3919f] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx-svc" [1a1ea5d6-0c56-448f-9ba5-c4a1dab3919f] Running
functional_test_tunnel_test.go:216: (dbg) TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup: run=nginx-svc healthy within 9.004108769s
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/Setup (9.14s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.39s)
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 service list -o json
functional_test.go:1490: Took "389.686907ms" to run "out/minikube-darwin-amd64 -p functional-963000 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.39s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.169.0.4:30389
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.25s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.24s)
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.24s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.25s)
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.169.0.4:30389
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.25s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP
functional_test_tunnel_test.go:234: (dbg) Run:  kubectl --context functional-963000 get svc nginx-svc -o jsonpath={.status.loadBalancer.ingress[0].ip}
--- PASS: TestFunctional/parallel/TunnelCmd/serial/WaitService/IngressIP (0.05s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:299: tunnel at http://10.104.118.237 is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:319: (dbg) Run:  dig +time=5 +tries=3 @10.96.0.10 nginx-svc.default.svc.cluster.local. A
functional_test_tunnel_test.go:327: DNS resolution by dig for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.04s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:351: (dbg) Run:  dscacheutil -q host -a name nginx-svc.default.svc.cluster.local.
functional_test_tunnel_test.go:359: DNS resolution by dscacheutil for nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.03s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:424: tunnel at http://nginx-svc.default.svc.cluster.local. is working!
--- PASS: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.02s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.14s)
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:434: (dbg) stopping [out/minikube-darwin-amd64 -p functional-963000 tunnel --alsologtostderr] ...
--- PASS: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.14s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.25s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-darwin-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.25s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.26s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-darwin-amd64 profile list
functional_test.go:1311: Took "178.220989ms" to run "out/minikube-darwin-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-darwin-amd64 profile list -l
functional_test.go:1325: Took "77.785961ms" to run "out/minikube-darwin-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.26s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json
functional_test.go:1362: Took "180.710364ms" to run "out/minikube-darwin-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-darwin-amd64 profile list -o json --light
functional_test.go:1375: Took "77.495459ms" to run "out/minikube-darwin-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.26s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.19s)
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-963000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port626684820/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1721644926702694000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port626684820/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1721644926702694000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port626684820/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1721644926702694000" to /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port626684820/001/test-1721644926702694000
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-963000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (157.505952ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Jul 22 10:42 created-by-test
-rw-r--r-- 1 docker docker 24 Jul 22 10:42 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Jul 22 10:42 test-1721644926702694000
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh cat /mount-9p/test-1721644926702694000
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-963000 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [0cd7111b-3da2-4d1d-8c0c-6283f0162d1d] Pending
helpers_test.go:344: "busybox-mount" [0cd7111b-3da2-4d1d-8c0c-6283f0162d1d] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
helpers_test.go:344: "busybox-mount" [0cd7111b-3da2-4d1d-8c0c-6283f0162d1d] Pending / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [0cd7111b-3da2-4d1d-8c0c-6283f0162d1d] Succeeded / Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.004681845s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-963000 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-963000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdany-port626684820/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.19s)

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (1.32s)
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-963000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port1432666987/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-963000 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (154.041213ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh -- ls -la /mount-9p
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-963000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port1432666987/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-963000 ssh "sudo umount -f /mount-9p": exit status 1 (124.708709ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-darwin-amd64 -p functional-963000 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-963000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdspecific-port1432666987/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (1.32s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.73s)
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-963000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2279727993/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-963000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2279727993/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-darwin-amd64 mount -p functional-963000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2279727993/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-963000 ssh "findmnt -T" /mount1: exit status 1 (156.76493ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p functional-963000 ssh "findmnt -T" /mount1: exit status 1 (192.546739ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-darwin-amd64 -p functional-963000 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-darwin-amd64 mount -p functional-963000 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-963000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2279727993/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-963000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2279727993/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-darwin-amd64 mount -p functional-963000 /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestFunctionalparallelMountCmdVerifyCleanup2279727993/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.73s)

                                                
                                    
TestFunctional/delete_echo-server_images (0.04s)
=== RUN   TestFunctional/delete_echo-server_images
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:1.0
functional_test.go:189: (dbg) Run:  docker rmi -f docker.io/kicbase/echo-server:functional-963000
--- PASS: TestFunctional/delete_echo-server_images (0.04s)

                                                
                                    
TestFunctional/delete_my-image_image (0.02s)
=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-963000
--- PASS: TestFunctional/delete_my-image_image (0.02s)

                                                
                                    
TestFunctional/delete_minikube_cached_images (0.02s)
=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-963000
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

                                                
                                    
TestMultiControlPlane/serial/StartCluster (203.83s)
=== RUN   TestMultiControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-090000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit 
E0722 03:42:46.626907    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
E0722 03:43:14.329812    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-darwin-amd64 start -p ha-090000 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=hyperkit : (3m23.455522284s)
ha_test.go:107: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/StartCluster (203.83s)

                                                
                                    
TestMultiControlPlane/serial/DeployApp (4.85s)
=== RUN   TestMultiControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-090000 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-090000 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-darwin-amd64 kubectl -p ha-090000 -- rollout status deployment/busybox: (2.570906508s)
ha_test.go:140: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-090000 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-090000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-090000 -- exec busybox-fc5497c4f-2769d -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-090000 -- exec busybox-fc5497c4f-2tcf2 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-090000 -- exec busybox-fc5497c4f-8n2c6 -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-090000 -- exec busybox-fc5497c4f-2769d -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-090000 -- exec busybox-fc5497c4f-2tcf2 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-090000 -- exec busybox-fc5497c4f-8n2c6 -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-090000 -- exec busybox-fc5497c4f-2769d -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-090000 -- exec busybox-fc5497c4f-2tcf2 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-090000 -- exec busybox-fc5497c4f-8n2c6 -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiControlPlane/serial/DeployApp (4.85s)

                                                
                                    
TestMultiControlPlane/serial/PingHostFromPods (1.29s)
=== RUN   TestMultiControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-090000 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-090000 -- exec busybox-fc5497c4f-2769d -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-090000 -- exec busybox-fc5497c4f-2769d -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-090000 -- exec busybox-fc5497c4f-2tcf2 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-090000 -- exec busybox-fc5497c4f-2tcf2 -- sh -c "ping -c 1 192.169.0.1"
ha_test.go:207: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-090000 -- exec busybox-fc5497c4f-8n2c6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p ha-090000 -- exec busybox-fc5497c4f-8n2c6 -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiControlPlane/serial/PingHostFromPods (1.29s)

                                                
                                    
TestMultiControlPlane/serial/AddWorkerNode (164.26s)
=== RUN   TestMultiControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 node add -p ha-090000 -v=7 --alsologtostderr
E0722 03:46:22.404610    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0722 03:46:22.411012    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0722 03:46:22.421707    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0722 03:46:22.443431    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0722 03:46:22.483605    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0722 03:46:22.564664    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0722 03:46:22.726081    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0722 03:46:23.046480    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0722 03:46:23.687795    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0722 03:46:24.969569    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0722 03:46:27.529654    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0722 03:46:32.650216    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0722 03:46:42.891041    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0722 03:47:03.371399    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0722 03:47:44.330462    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0722 03:47:46.618897    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-darwin-amd64 node add -p ha-090000 -v=7 --alsologtostderr: (2m43.803487714s)
ha_test.go:234: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 status -v=7 --alsologtostderr
--- PASS: TestMultiControlPlane/serial/AddWorkerNode (164.26s)

                                                
                                    
TestMultiControlPlane/serial/NodeLabels (0.05s)
=== RUN   TestMultiControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-090000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiControlPlane/serial/NodeLabels (0.05s)

                                                
                                    
TestMultiControlPlane/serial/HAppyAfterClusterStart (0.35s)
=== RUN   TestMultiControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterClusterStart (0.35s)

                                                
                                    
TestMultiControlPlane/serial/CopyFile (9.23s)
=== RUN   TestMultiControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 cp testdata/cp-test.txt ha-090000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 cp ha-090000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3050769313/001/cp-test_ha-090000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 cp ha-090000:/home/docker/cp-test.txt ha-090000-m02:/home/docker/cp-test_ha-090000_ha-090000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000-m02 "sudo cat /home/docker/cp-test_ha-090000_ha-090000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 cp ha-090000:/home/docker/cp-test.txt ha-090000-m03:/home/docker/cp-test_ha-090000_ha-090000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000-m03 "sudo cat /home/docker/cp-test_ha-090000_ha-090000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 cp ha-090000:/home/docker/cp-test.txt ha-090000-m04:/home/docker/cp-test_ha-090000_ha-090000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000-m04 "sudo cat /home/docker/cp-test_ha-090000_ha-090000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 cp testdata/cp-test.txt ha-090000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 cp ha-090000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3050769313/001/cp-test_ha-090000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 cp ha-090000-m02:/home/docker/cp-test.txt ha-090000:/home/docker/cp-test_ha-090000-m02_ha-090000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000 "sudo cat /home/docker/cp-test_ha-090000-m02_ha-090000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 cp ha-090000-m02:/home/docker/cp-test.txt ha-090000-m03:/home/docker/cp-test_ha-090000-m02_ha-090000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000-m03 "sudo cat /home/docker/cp-test_ha-090000-m02_ha-090000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 cp ha-090000-m02:/home/docker/cp-test.txt ha-090000-m04:/home/docker/cp-test_ha-090000-m02_ha-090000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000-m04 "sudo cat /home/docker/cp-test_ha-090000-m02_ha-090000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 cp testdata/cp-test.txt ha-090000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 cp ha-090000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3050769313/001/cp-test_ha-090000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 cp ha-090000-m03:/home/docker/cp-test.txt ha-090000:/home/docker/cp-test_ha-090000-m03_ha-090000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000 "sudo cat /home/docker/cp-test_ha-090000-m03_ha-090000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 cp ha-090000-m03:/home/docker/cp-test.txt ha-090000-m02:/home/docker/cp-test_ha-090000-m03_ha-090000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000-m02 "sudo cat /home/docker/cp-test_ha-090000-m03_ha-090000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 cp ha-090000-m03:/home/docker/cp-test.txt ha-090000-m04:/home/docker/cp-test_ha-090000-m03_ha-090000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000-m04 "sudo cat /home/docker/cp-test_ha-090000-m03_ha-090000-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 cp testdata/cp-test.txt ha-090000-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 cp ha-090000-m04:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiControlPlaneserialCopyFile3050769313/001/cp-test_ha-090000-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 cp ha-090000-m04:/home/docker/cp-test.txt ha-090000:/home/docker/cp-test_ha-090000-m04_ha-090000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000 "sudo cat /home/docker/cp-test_ha-090000-m04_ha-090000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 cp ha-090000-m04:/home/docker/cp-test.txt ha-090000-m02:/home/docker/cp-test_ha-090000-m04_ha-090000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000-m02 "sudo cat /home/docker/cp-test_ha-090000-m04_ha-090000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 cp ha-090000-m04:/home/docker/cp-test.txt ha-090000-m03:/home/docker/cp-test_ha-090000-m04_ha-090000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 ssh -n ha-090000-m03 "sudo cat /home/docker/cp-test_ha-090000-m04_ha-090000-m03.txt"
--- PASS: TestMultiControlPlane/serial/CopyFile (9.23s)

                                                
                                    
TestMultiControlPlane/serial/StopSecondaryNode (8.7s)
=== RUN   TestMultiControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 node stop m02 -v=7 --alsologtostderr
E0722 03:49:06.249878    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
ha_test.go:363: (dbg) Done: out/minikube-darwin-amd64 -p ha-090000 node stop m02 -v=7 --alsologtostderr: (8.340260343s)
ha_test.go:369: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 status -v=7 --alsologtostderr
ha_test.go:369: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-090000 status -v=7 --alsologtostderr: exit status 7 (357.623423ms)

                                                
                                                
-- stdout --
	ha-090000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-090000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-090000-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-090000-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 03:49:06.342923    3663 out.go:291] Setting OutFile to fd 1 ...
	I0722 03:49:06.343223    3663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:49:06.343229    3663 out.go:304] Setting ErrFile to fd 2...
	I0722 03:49:06.343232    3663 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:49:06.343401    3663 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1111/.minikube/bin
	I0722 03:49:06.343583    3663 out.go:298] Setting JSON to false
	I0722 03:49:06.343603    3663 mustload.go:65] Loading cluster: ha-090000
	I0722 03:49:06.343651    3663 notify.go:220] Checking for updates...
	I0722 03:49:06.343902    3663 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:49:06.343918    3663 status.go:255] checking status of ha-090000 ...
	I0722 03:49:06.344298    3663 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:49:06.344340    3663 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:49:06.353199    3663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51611
	I0722 03:49:06.353542    3663 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:49:06.353977    3663 main.go:141] libmachine: Using API Version  1
	I0722 03:49:06.353989    3663 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:49:06.354212    3663 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:49:06.354321    3663 main.go:141] libmachine: (ha-090000) Calling .GetState
	I0722 03:49:06.354390    3663 main.go:141] libmachine: (ha-090000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:49:06.354470    3663 main.go:141] libmachine: (ha-090000) DBG | hyperkit pid from json: 3202
	I0722 03:49:06.355463    3663 status.go:330] ha-090000 host status = "Running" (err=<nil>)
	I0722 03:49:06.355486    3663 host.go:66] Checking if "ha-090000" exists ...
	I0722 03:49:06.355769    3663 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:49:06.355795    3663 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:49:06.364134    3663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51613
	I0722 03:49:06.364498    3663 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:49:06.364858    3663 main.go:141] libmachine: Using API Version  1
	I0722 03:49:06.364873    3663 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:49:06.365075    3663 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:49:06.365196    3663 main.go:141] libmachine: (ha-090000) Calling .GetIP
	I0722 03:49:06.365288    3663 host.go:66] Checking if "ha-090000" exists ...
	I0722 03:49:06.365554    3663 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:49:06.365580    3663 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:49:06.373970    3663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51615
	I0722 03:49:06.374277    3663 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:49:06.374592    3663 main.go:141] libmachine: Using API Version  1
	I0722 03:49:06.374608    3663 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:49:06.374831    3663 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:49:06.374941    3663 main.go:141] libmachine: (ha-090000) Calling .DriverName
	I0722 03:49:06.375084    3663 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 03:49:06.375105    3663 main.go:141] libmachine: (ha-090000) Calling .GetSSHHostname
	I0722 03:49:06.375182    3663 main.go:141] libmachine: (ha-090000) Calling .GetSSHPort
	I0722 03:49:06.375256    3663 main.go:141] libmachine: (ha-090000) Calling .GetSSHKeyPath
	I0722 03:49:06.375336    3663 main.go:141] libmachine: (ha-090000) Calling .GetSSHUsername
	I0722 03:49:06.375415    3663 sshutil.go:53] new ssh client: &{IP:192.169.0.5 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000/id_rsa Username:docker}
	I0722 03:49:06.412141    3663 ssh_runner.go:195] Run: systemctl --version
	I0722 03:49:06.416458    3663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 03:49:06.428802    3663 kubeconfig.go:125] found "ha-090000" server: "https://192.169.0.254:8443"
	I0722 03:49:06.428825    3663 api_server.go:166] Checking apiserver status ...
	I0722 03:49:06.428870    3663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 03:49:06.441663    3663 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1944/cgroup
	W0722 03:49:06.452925    3663 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1944/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 03:49:06.452983    3663 ssh_runner.go:195] Run: ls
	I0722 03:49:06.456278    3663 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0722 03:49:06.459489    3663 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0722 03:49:06.459501    3663 status.go:422] ha-090000 apiserver status = Running (err=<nil>)
	I0722 03:49:06.459511    3663 status.go:257] ha-090000 status: &{Name:ha-090000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 03:49:06.459522    3663 status.go:255] checking status of ha-090000-m02 ...
	I0722 03:49:06.459773    3663 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:49:06.459794    3663 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:49:06.468358    3663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51619
	I0722 03:49:06.468698    3663 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:49:06.469014    3663 main.go:141] libmachine: Using API Version  1
	I0722 03:49:06.469023    3663 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:49:06.469236    3663 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:49:06.469334    3663 main.go:141] libmachine: (ha-090000-m02) Calling .GetState
	I0722 03:49:06.469421    3663 main.go:141] libmachine: (ha-090000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:49:06.469495    3663 main.go:141] libmachine: (ha-090000-m02) DBG | hyperkit pid from json: 3215
	I0722 03:49:06.470438    3663 main.go:141] libmachine: (ha-090000-m02) DBG | hyperkit pid 3215 missing from process table
	I0722 03:49:06.470488    3663 status.go:330] ha-090000-m02 host status = "Stopped" (err=<nil>)
	I0722 03:49:06.470500    3663 status.go:343] host is not running, skipping remaining checks
	I0722 03:49:06.470506    3663 status.go:257] ha-090000-m02 status: &{Name:ha-090000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 03:49:06.470519    3663 status.go:255] checking status of ha-090000-m03 ...
	I0722 03:49:06.470790    3663 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:49:06.470813    3663 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:49:06.479114    3663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51621
	I0722 03:49:06.479470    3663 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:49:06.479818    3663 main.go:141] libmachine: Using API Version  1
	I0722 03:49:06.479833    3663 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:49:06.480049    3663 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:49:06.480175    3663 main.go:141] libmachine: (ha-090000-m03) Calling .GetState
	I0722 03:49:06.480261    3663 main.go:141] libmachine: (ha-090000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:49:06.480368    3663 main.go:141] libmachine: (ha-090000-m03) DBG | hyperkit pid from json: 3231
	I0722 03:49:06.481347    3663 status.go:330] ha-090000-m03 host status = "Running" (err=<nil>)
	I0722 03:49:06.481359    3663 host.go:66] Checking if "ha-090000-m03" exists ...
	I0722 03:49:06.481613    3663 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:49:06.481640    3663 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:49:06.490075    3663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51623
	I0722 03:49:06.490424    3663 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:49:06.490743    3663 main.go:141] libmachine: Using API Version  1
	I0722 03:49:06.490776    3663 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:49:06.490987    3663 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:49:06.491097    3663 main.go:141] libmachine: (ha-090000-m03) Calling .GetIP
	I0722 03:49:06.491189    3663 host.go:66] Checking if "ha-090000-m03" exists ...
	I0722 03:49:06.491444    3663 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:49:06.491475    3663 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:49:06.499721    3663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51625
	I0722 03:49:06.500090    3663 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:49:06.500412    3663 main.go:141] libmachine: Using API Version  1
	I0722 03:49:06.500424    3663 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:49:06.500636    3663 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:49:06.500743    3663 main.go:141] libmachine: (ha-090000-m03) Calling .DriverName
	I0722 03:49:06.500866    3663 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 03:49:06.500877    3663 main.go:141] libmachine: (ha-090000-m03) Calling .GetSSHHostname
	I0722 03:49:06.500951    3663 main.go:141] libmachine: (ha-090000-m03) Calling .GetSSHPort
	I0722 03:49:06.501066    3663 main.go:141] libmachine: (ha-090000-m03) Calling .GetSSHKeyPath
	I0722 03:49:06.501155    3663 main.go:141] libmachine: (ha-090000-m03) Calling .GetSSHUsername
	I0722 03:49:06.501231    3663 sshutil.go:53] new ssh client: &{IP:192.169.0.7 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m03/id_rsa Username:docker}
	I0722 03:49:06.534181    3663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 03:49:06.545920    3663 kubeconfig.go:125] found "ha-090000" server: "https://192.169.0.254:8443"
	I0722 03:49:06.545935    3663 api_server.go:166] Checking apiserver status ...
	I0722 03:49:06.545976    3663 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 03:49:06.557488    3663 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/2027/cgroup
	W0722 03:49:06.565509    3663 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/2027/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 03:49:06.565559    3663 ssh_runner.go:195] Run: ls
	I0722 03:49:06.568724    3663 api_server.go:253] Checking apiserver healthz at https://192.169.0.254:8443/healthz ...
	I0722 03:49:06.572838    3663 api_server.go:279] https://192.169.0.254:8443/healthz returned 200:
	ok
	I0722 03:49:06.572852    3663 status.go:422] ha-090000-m03 apiserver status = Running (err=<nil>)
	I0722 03:49:06.572860    3663 status.go:257] ha-090000-m03 status: &{Name:ha-090000-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 03:49:06.572871    3663 status.go:255] checking status of ha-090000-m04 ...
	I0722 03:49:06.573149    3663 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:49:06.573174    3663 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:49:06.581832    3663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51629
	I0722 03:49:06.582180    3663 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:49:06.582521    3663 main.go:141] libmachine: Using API Version  1
	I0722 03:49:06.582534    3663 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:49:06.582760    3663 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:49:06.582886    3663 main.go:141] libmachine: (ha-090000-m04) Calling .GetState
	I0722 03:49:06.582970    3663 main.go:141] libmachine: (ha-090000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:49:06.583051    3663 main.go:141] libmachine: (ha-090000-m04) DBG | hyperkit pid from json: 3325
	I0722 03:49:06.584055    3663 status.go:330] ha-090000-m04 host status = "Running" (err=<nil>)
	I0722 03:49:06.584065    3663 host.go:66] Checking if "ha-090000-m04" exists ...
	I0722 03:49:06.584331    3663 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:49:06.584361    3663 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:49:06.592847    3663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51631
	I0722 03:49:06.593179    3663 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:49:06.593511    3663 main.go:141] libmachine: Using API Version  1
	I0722 03:49:06.593529    3663 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:49:06.593721    3663 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:49:06.593829    3663 main.go:141] libmachine: (ha-090000-m04) Calling .GetIP
	I0722 03:49:06.593909    3663 host.go:66] Checking if "ha-090000-m04" exists ...
	I0722 03:49:06.594180    3663 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:49:06.594204    3663 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:49:06.602472    3663 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51633
	I0722 03:49:06.602799    3663 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:49:06.603131    3663 main.go:141] libmachine: Using API Version  1
	I0722 03:49:06.603146    3663 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:49:06.603365    3663 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:49:06.603476    3663 main.go:141] libmachine: (ha-090000-m04) Calling .DriverName
	I0722 03:49:06.603606    3663 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 03:49:06.603616    3663 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHHostname
	I0722 03:49:06.603694    3663 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHPort
	I0722 03:49:06.603787    3663 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHKeyPath
	I0722 03:49:06.603874    3663 main.go:141] libmachine: (ha-090000-m04) Calling .GetSSHUsername
	I0722 03:49:06.603947    3663 sshutil.go:53] new ssh client: &{IP:192.169.0.8 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/ha-090000-m04/id_rsa Username:docker}
	I0722 03:49:06.635351    3663 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 03:49:06.645655    3663 status.go:257] ha-090000-m04 status: &{Name:ha-090000-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
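The stderr block above is the per-node probe behind `minikube status`: look up the hyperkit pid, SSH into the VM, sample disk usage with `df -h /var | awk 'NR==2{print $5}'`, check the kubelet unit, find the kube-apiserver process, and finally query the cluster endpoint's /healthz. The freezer-cgroup lookup that exits with status 1 is only logged as a warning and does not fail the check. The final step amounts to roughly the following Go sketch (illustrative only; checkAPIServer and the hard-coded endpoint are assumptions, not minikube's actual code):

// Minimal sketch of the /healthz probe seen in the log above.
package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func checkAPIServer(endpoint string) error {
	// The test only cares that /healthz answers 200 "ok"; certificate
	// verification is skipped here because the cluster CA lives on the VM.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(endpoint + "/healthz")
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK {
		return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	fmt.Printf("%s/healthz returned %d: %s\n", endpoint, resp.StatusCode, body)
	return nil
}

func main() {
	// Endpoint taken from the log above; substitute your own cluster address.
	if err := checkAPIServer("https://192.169.0.254:8443"); err != nil {
		fmt.Println("apiserver status = Error:", err)
		return
	}
	fmt.Println("apiserver status = Running")
}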
--- PASS: TestMultiControlPlane/serial/StopSecondaryNode (8.70s)

TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.27s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterControlPlaneNodeStop (0.27s)

TestMultiControlPlane/serial/RestartSecondaryNode (36.95s)

=== RUN   TestMultiControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 node start m02 -v=7 --alsologtostderr
ha_test.go:420: (dbg) Done: out/minikube-darwin-amd64 -p ha-090000 node start m02 -v=7 --alsologtostderr: (36.450326204s)
ha_test.go:428: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 status -v=7 --alsologtostderr
ha_test.go:448: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiControlPlane/serial/RestartSecondaryNode (36.95s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.34s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.34s)

TestMultiControlPlane/serial/RestartClusterKeepsNodes (296.38s)

=== RUN   TestMultiControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-090000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-darwin-amd64 stop -p ha-090000 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Done: out/minikube-darwin-amd64 stop -p ha-090000 -v=7 --alsologtostderr: (27.037061297s)
ha_test.go:467: (dbg) Run:  out/minikube-darwin-amd64 start -p ha-090000 --wait=true -v=7 --alsologtostderr
E0722 03:51:22.397013    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0722 03:51:50.086459    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0722 03:52:46.610417    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
E0722 03:54:09.673746    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-darwin-amd64 start -p ha-090000 --wait=true -v=7 --alsologtostderr: (4m29.224657281s)
ha_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 node list -p ha-090000
--- PASS: TestMultiControlPlane/serial/RestartClusterKeepsNodes (296.38s)

TestMultiControlPlane/serial/DeleteSecondaryNode (8.14s)

=== RUN   TestMultiControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-darwin-amd64 -p ha-090000 node delete m03 -v=7 --alsologtostderr: (7.698743407s)
ha_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 status -v=7 --alsologtostderr
ha_test.go:511: (dbg) Run:  kubectl get nodes
ha_test.go:519: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
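The go-template in the previous command walks every node and, for its "Ready" condition, prints the condition status, one line per node. The same template can be exercised standalone with Go's text/template over map-shaped data such as `kubectl get nodes -o json` would produce (a minimal sketch; the fake two-node list below is illustrative):

// Evaluate the node-readiness go-template used by the test outside kubectl.
package main

import (
	"os"
	"text/template"
)

func main() {
	const tmpl = `{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}`

	// Two fake nodes standing in for the real "kubectl get nodes -o json" output.
	nodes := map[string]any{
		"items": []any{
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
			}}},
			map[string]any{"status": map[string]any{"conditions": []any{
				map[string]any{"type": "Ready", "status": "True"},
			}}},
		},
	}

	t := template.Must(template.New("ready").Parse(tmpl))
	_ = t.Execute(os.Stdout, nodes) // prints " True" once per node
}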
--- PASS: TestMultiControlPlane/serial/DeleteSecondaryNode (8.14s)

TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.26s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.26s)

TestMultiControlPlane/serial/StopCluster (24.98s)

=== RUN   TestMultiControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 stop -v=7 --alsologtostderr
ha_test.go:531: (dbg) Done: out/minikube-darwin-amd64 -p ha-090000 stop -v=7 --alsologtostderr: (24.886069598s)
ha_test.go:537: (dbg) Run:  out/minikube-darwin-amd64 -p ha-090000 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p ha-090000 status -v=7 --alsologtostderr: exit status 7 (89.161462ms)

                                                
                                                
-- stdout --
	ha-090000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-090000-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-090000-m04
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 03:55:13.912569    3906 out.go:291] Setting OutFile to fd 1 ...
	I0722 03:55:13.912851    3906 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:55:13.912857    3906 out.go:304] Setting ErrFile to fd 2...
	I0722 03:55:13.912860    3906 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 03:55:13.913039    3906 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1111/.minikube/bin
	I0722 03:55:13.913207    3906 out.go:298] Setting JSON to false
	I0722 03:55:13.913230    3906 mustload.go:65] Loading cluster: ha-090000
	I0722 03:55:13.913274    3906 notify.go:220] Checking for updates...
	I0722 03:55:13.913527    3906 config.go:182] Loaded profile config "ha-090000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 03:55:13.913543    3906 status.go:255] checking status of ha-090000 ...
	I0722 03:55:13.913905    3906 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:55:13.913960    3906 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:55:13.922874    3906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51941
	I0722 03:55:13.923191    3906 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:55:13.923635    3906 main.go:141] libmachine: Using API Version  1
	I0722 03:55:13.923656    3906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:55:13.923882    3906 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:55:13.923985    3906 main.go:141] libmachine: (ha-090000) Calling .GetState
	I0722 03:55:13.924078    3906 main.go:141] libmachine: (ha-090000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:55:13.924146    3906 main.go:141] libmachine: (ha-090000) DBG | hyperkit pid from json: 3743
	I0722 03:55:13.925008    3906 main.go:141] libmachine: (ha-090000) DBG | hyperkit pid 3743 missing from process table
	I0722 03:55:13.925087    3906 status.go:330] ha-090000 host status = "Stopped" (err=<nil>)
	I0722 03:55:13.925100    3906 status.go:343] host is not running, skipping remaining checks
	I0722 03:55:13.925107    3906 status.go:257] ha-090000 status: &{Name:ha-090000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 03:55:13.925130    3906 status.go:255] checking status of ha-090000-m02 ...
	I0722 03:55:13.925400    3906 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:55:13.925453    3906 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:55:13.933546    3906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51943
	I0722 03:55:13.933883    3906 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:55:13.934232    3906 main.go:141] libmachine: Using API Version  1
	I0722 03:55:13.934247    3906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:55:13.934458    3906 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:55:13.934583    3906 main.go:141] libmachine: (ha-090000-m02) Calling .GetState
	I0722 03:55:13.934682    3906 main.go:141] libmachine: (ha-090000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:55:13.934781    3906 main.go:141] libmachine: (ha-090000-m02) DBG | hyperkit pid from json: 3753
	I0722 03:55:13.935648    3906 main.go:141] libmachine: (ha-090000-m02) DBG | hyperkit pid 3753 missing from process table
	I0722 03:55:13.935684    3906 status.go:330] ha-090000-m02 host status = "Stopped" (err=<nil>)
	I0722 03:55:13.935691    3906 status.go:343] host is not running, skipping remaining checks
	I0722 03:55:13.935699    3906 status.go:257] ha-090000-m02 status: &{Name:ha-090000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 03:55:13.935712    3906 status.go:255] checking status of ha-090000-m04 ...
	I0722 03:55:13.935964    3906 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 03:55:13.935987    3906 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 03:55:13.944971    3906 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:51945
	I0722 03:55:13.945303    3906 main.go:141] libmachine: () Calling .GetVersion
	I0722 03:55:13.945606    3906 main.go:141] libmachine: Using API Version  1
	I0722 03:55:13.945616    3906 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 03:55:13.945804    3906 main.go:141] libmachine: () Calling .GetMachineName
	I0722 03:55:13.945922    3906 main.go:141] libmachine: (ha-090000-m04) Calling .GetState
	I0722 03:55:13.945997    3906 main.go:141] libmachine: (ha-090000-m04) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 03:55:13.946065    3906 main.go:141] libmachine: (ha-090000-m04) DBG | hyperkit pid from json: 3802
	I0722 03:55:13.946976    3906 main.go:141] libmachine: (ha-090000-m04) DBG | hyperkit pid 3802 missing from process table
	I0722 03:55:13.947004    3906 status.go:330] ha-090000-m04 host status = "Stopped" (err=<nil>)
	I0722 03:55:13.947009    3906 status.go:343] host is not running, skipping remaining checks
	I0722 03:55:13.947015    3906 status.go:257] ha-090000-m04 status: &{Name:ha-090000-m04 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
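The exit status 7 reported above is expected after a full stop rather than a failure: the help text for `minikube status` describes the exit code as encoding the VM, cluster, and Kubernetes status on its bits from right to left, so 7 would mean all three are not OK. A small Go sketch of decoding it, assuming that documented bit layout (an assumption taken from the help text, not from this report):

// Decode a "minikube status" exit code under the assumed bit layout.
package main

import "fmt"

func decodeStatusExit(code int) {
	if code == 0 {
		fmt.Println("everything running")
		return
	}
	if code&1 != 0 {
		fmt.Println("host (VM) not OK")
	}
	if code&2 != 0 {
		fmt.Println("cluster not OK")
	}
	if code&4 != 0 {
		fmt.Println("kubernetes not OK")
	}
}

func main() {
	decodeStatusExit(7) // the value returned after "minikube -p ha-090000 stop"
}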
--- PASS: TestMultiControlPlane/serial/StopCluster (24.98s)

TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.25s)

=== RUN   TestMultiControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/DegradedAfterClusterRestart (0.25s)

TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.33s)

=== RUN   TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.33s)

TestImageBuild/serial/Setup (40.53s)

=== RUN   TestImageBuild/serial/Setup
image_test.go:69: (dbg) Run:  out/minikube-darwin-amd64 start -p image-365000 --driver=hyperkit 
E0722 04:02:45.478306    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0722 04:02:46.644619    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
image_test.go:69: (dbg) Done: out/minikube-darwin-amd64 start -p image-365000 --driver=hyperkit : (40.530288103s)
--- PASS: TestImageBuild/serial/Setup (40.53s)

TestImageBuild/serial/NormalBuild (1.39s)

=== RUN   TestImageBuild/serial/NormalBuild
image_test.go:78: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-365000
image_test.go:78: (dbg) Done: out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal -p image-365000: (1.387824533s)
--- PASS: TestImageBuild/serial/NormalBuild (1.39s)

TestImageBuild/serial/BuildWithBuildArg (0.51s)

=== RUN   TestImageBuild/serial/BuildWithBuildArg
image_test.go:99: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest --build-opt=build-arg=ENV_A=test_env_str --build-opt=no-cache ./testdata/image-build/test-arg -p image-365000
--- PASS: TestImageBuild/serial/BuildWithBuildArg (0.51s)

TestImageBuild/serial/BuildWithDockerIgnore (0.24s)

=== RUN   TestImageBuild/serial/BuildWithDockerIgnore
image_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest ./testdata/image-build/test-normal --build-opt=no-cache -p image-365000
--- PASS: TestImageBuild/serial/BuildWithDockerIgnore (0.24s)

TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.22s)

=== RUN   TestImageBuild/serial/BuildWithSpecifiedDockerfile
image_test.go:88: (dbg) Run:  out/minikube-darwin-amd64 image build -t aaa:latest -f inner/Dockerfile ./testdata/image-build/test-f -p image-365000
--- PASS: TestImageBuild/serial/BuildWithSpecifiedDockerfile (0.22s)

TestJSONOutput/start/Command (55.17s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-915000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit 
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 start -p json-output-915000 --output=json --user=testUser --memory=2200 --wait=true --driver=hyperkit : (55.166838285s)
--- PASS: TestJSONOutput/start/Command (55.17s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.46s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 pause -p json-output-915000 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.46s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.46s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 unpause -p json-output-915000 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.46s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (8.34s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-darwin-amd64 stop -p json-output-915000 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-darwin-amd64 stop -p json-output-915000 --output=json --user=testUser: (8.34289936s)
--- PASS: TestJSONOutput/stop/Command (8.34s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

TestErrorJSONOutput (0.57s)

=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-darwin-amd64 start -p json-output-error-616000 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p json-output-error-616000 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (355.717003ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"328a5c6e-ea41-4731-8e21-34b0783a6e29","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-616000] minikube v1.33.1 on Darwin 14.5","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"a282487c-46ac-4699-a241-583b92f916b4","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=19313"}}
	{"specversion":"1.0","id":"5d887ad2-e87c-42d0-aba7-ba2c38f64018","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/Users/jenkins/minikube-integration/19313-1111/kubeconfig"}}
	{"specversion":"1.0","id":"358f9e34-9ab0-4c21-8d44-68eb29fb9650","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-darwin-amd64"}}
	{"specversion":"1.0","id":"6122aced-52e5-49ca-af89-c4f93a0de4a3","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"5f5d15d2-dca1-480e-a5b9-c13c30e46b5c","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1111/.minikube"}}
	{"specversion":"1.0","id":"882de7b6-950b-4dc2-ad82-6d450427c998","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"24b9b666-f30b-4e54-bd08-928ea3e4fcbe","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on darwin/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
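Each line in the stdout above is a CloudEvents-style JSON object; with --output=json this is minikube's machine-readable progress stream. A small Go sketch of consuming it (the event struct and program below are illustrative; only the field names are taken from the events shown):

// Read line-delimited minikube JSON events from stdin and summarize them.
package main

import (
	"bufio"
	"encoding/json"
	"fmt"
	"os"
)

type cloudEvent struct {
	Type string            `json:"type"`
	Data map[string]string `json:"data"`
}

func main() {
	sc := bufio.NewScanner(os.Stdin)
	for sc.Scan() {
		var ev cloudEvent
		if err := json.Unmarshal(sc.Bytes(), &ev); err != nil {
			continue // skip anything that is not a JSON event line
		}
		switch ev.Type {
		case "io.k8s.sigs.minikube.error":
			fmt.Printf("error %s (exit code %s): %s\n", ev.Data["name"], ev.Data["exitcode"], ev.Data["message"])
		case "io.k8s.sigs.minikube.step":
			fmt.Printf("step %s/%s: %s\n", ev.Data["currentstep"], ev.Data["totalsteps"], ev.Data["message"])
		default:
			fmt.Printf("%s: %s\n", ev.Type, ev.Data["message"])
		}
	}
}

Piping the output of a command such as the `minikube start ... --output=json` run above into a program like this is one way to replay the event stream by hand.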
helpers_test.go:175: Cleaning up "json-output-error-616000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p json-output-error-616000
--- PASS: TestErrorJSONOutput (0.57s)

TestMainNoArgs (0.08s)

=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-darwin-amd64
--- PASS: TestMainNoArgs (0.08s)

TestMinikubeProfile (91.01s)

=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p first-378000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p first-378000 --driver=hyperkit : (39.82569215s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p second-380000 --driver=hyperkit 
minikube_profile_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p second-380000 --driver=hyperkit : (39.887585114s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile first-378000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-darwin-amd64 profile second-380000
minikube_profile_test.go:55: (dbg) Run:  out/minikube-darwin-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-380000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p second-380000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p second-380000: (5.300540722s)
helpers_test.go:175: Cleaning up "first-378000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p first-378000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p first-378000: (5.239487135s)
--- PASS: TestMinikubeProfile (91.01s)

TestMountStart/serial/StartWithMountFirst (21.53s)

=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-darwin-amd64 start -p mount-start-1-572000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit 
mount_start_test.go:98: (dbg) Done: out/minikube-darwin-amd64 start -p mount-start-1-572000 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=hyperkit : (20.529552564s)
--- PASS: TestMountStart/serial/StartWithMountFirst (21.53s)

TestMultiNode/serial/FreshStart2Nodes (121.06s)

=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-688000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit 
E0722 04:06:22.419092    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0722 04:07:46.634611    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-688000 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=hyperkit : (2m0.821774694s)
multinode_test.go:102: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (121.06s)

TestMultiNode/serial/DeployApp2Nodes (4.33s)

=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-688000 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-688000 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-darwin-amd64 kubectl -p multinode-688000 -- rollout status deployment/busybox: (2.68084792s)
multinode_test.go:505: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-688000 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-688000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-688000 -- exec busybox-fc5497c4f-hqzlg -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-688000 -- exec busybox-fc5497c4f-xm8dl -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-688000 -- exec busybox-fc5497c4f-hqzlg -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-688000 -- exec busybox-fc5497c4f-xm8dl -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-688000 -- exec busybox-fc5497c4f-hqzlg -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-688000 -- exec busybox-fc5497c4f-xm8dl -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.33s)

TestMultiNode/serial/PingHostFrom2Pods (0.88s)

=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-688000 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-688000 -- exec busybox-fc5497c4f-hqzlg -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-688000 -- exec busybox-fc5497c4f-hqzlg -- sh -c "ping -c 1 192.169.0.1"
multinode_test.go:572: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-688000 -- exec busybox-fc5497c4f-xm8dl -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-darwin-amd64 kubectl -p multinode-688000 -- exec busybox-fc5497c4f-xm8dl -- sh -c "ping -c 1 192.169.0.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.88s)

TestMultiNode/serial/AddNode (47.67s)

=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-688000 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-darwin-amd64 node add -p multinode-688000 -v 3 --alsologtostderr: (47.35225434s)
multinode_test.go:127: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (47.67s)

TestMultiNode/serial/MultiNodeLabels (0.05s)

=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-688000 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.05s)

TestMultiNode/serial/ProfileList (0.18s)

=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.18s)

TestMultiNode/serial/CopyFile (5.22s)

=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 cp testdata/cp-test.txt multinode-688000:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 ssh -n multinode-688000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 cp multinode-688000:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile2735164485/001/cp-test_multinode-688000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 ssh -n multinode-688000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 cp multinode-688000:/home/docker/cp-test.txt multinode-688000-m02:/home/docker/cp-test_multinode-688000_multinode-688000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 ssh -n multinode-688000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 ssh -n multinode-688000-m02 "sudo cat /home/docker/cp-test_multinode-688000_multinode-688000-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 cp multinode-688000:/home/docker/cp-test.txt multinode-688000-m03:/home/docker/cp-test_multinode-688000_multinode-688000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 ssh -n multinode-688000 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 ssh -n multinode-688000-m03 "sudo cat /home/docker/cp-test_multinode-688000_multinode-688000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 cp testdata/cp-test.txt multinode-688000-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 ssh -n multinode-688000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 cp multinode-688000-m02:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile2735164485/001/cp-test_multinode-688000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 ssh -n multinode-688000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 cp multinode-688000-m02:/home/docker/cp-test.txt multinode-688000:/home/docker/cp-test_multinode-688000-m02_multinode-688000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 ssh -n multinode-688000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 ssh -n multinode-688000 "sudo cat /home/docker/cp-test_multinode-688000-m02_multinode-688000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 cp multinode-688000-m02:/home/docker/cp-test.txt multinode-688000-m03:/home/docker/cp-test_multinode-688000-m02_multinode-688000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 ssh -n multinode-688000-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 ssh -n multinode-688000-m03 "sudo cat /home/docker/cp-test_multinode-688000-m02_multinode-688000-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 cp testdata/cp-test.txt multinode-688000-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 ssh -n multinode-688000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 cp multinode-688000-m03:/home/docker/cp-test.txt /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestMultiNodeserialCopyFile2735164485/001/cp-test_multinode-688000-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 ssh -n multinode-688000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 cp multinode-688000-m03:/home/docker/cp-test.txt multinode-688000:/home/docker/cp-test_multinode-688000-m03_multinode-688000.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 ssh -n multinode-688000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 ssh -n multinode-688000 "sudo cat /home/docker/cp-test_multinode-688000-m03_multinode-688000.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 cp multinode-688000-m03:/home/docker/cp-test.txt multinode-688000-m02:/home/docker/cp-test_multinode-688000-m03_multinode-688000-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 ssh -n multinode-688000-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 ssh -n multinode-688000-m02 "sudo cat /home/docker/cp-test_multinode-688000-m03_multinode-688000-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (5.22s)

TestMultiNode/serial/StopNode (2.83s)

=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-darwin-amd64 -p multinode-688000 node stop m03: (2.328329861s)
multinode_test.go:254: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-688000 status: exit status 7 (250.012833ms)

                                                
                                                
-- stdout --
	multinode-688000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-688000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-688000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-688000 status --alsologtostderr: exit status 7 (247.057732ms)

                                                
                                                
-- stdout --
	multinode-688000
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-688000-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-688000-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:09:07.947644    4927 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:09:07.947924    4927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:09:07.947930    4927 out.go:304] Setting ErrFile to fd 2...
	I0722 04:09:07.947934    4927 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:09:07.948098    4927 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1111/.minikube/bin
	I0722 04:09:07.948288    4927 out.go:298] Setting JSON to false
	I0722 04:09:07.948310    4927 mustload.go:65] Loading cluster: multinode-688000
	I0722 04:09:07.948346    4927 notify.go:220] Checking for updates...
	I0722 04:09:07.948625    4927 config.go:182] Loaded profile config "multinode-688000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:09:07.948640    4927 status.go:255] checking status of multinode-688000 ...
	I0722 04:09:07.949000    4927 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:09:07.949048    4927 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:09:07.957605    4927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52977
	I0722 04:09:07.958073    4927 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:09:07.958498    4927 main.go:141] libmachine: Using API Version  1
	I0722 04:09:07.958507    4927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:09:07.958719    4927 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:09:07.958832    4927 main.go:141] libmachine: (multinode-688000) Calling .GetState
	I0722 04:09:07.958906    4927 main.go:141] libmachine: (multinode-688000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 04:09:07.958975    4927 main.go:141] libmachine: (multinode-688000) DBG | hyperkit pid from json: 4625
	I0722 04:09:07.960144    4927 status.go:330] multinode-688000 host status = "Running" (err=<nil>)
	I0722 04:09:07.960167    4927 host.go:66] Checking if "multinode-688000" exists ...
	I0722 04:09:07.960419    4927 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:09:07.960443    4927 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:09:07.968720    4927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52979
	I0722 04:09:07.969066    4927 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:09:07.969406    4927 main.go:141] libmachine: Using API Version  1
	I0722 04:09:07.969422    4927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:09:07.969622    4927 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:09:07.969727    4927 main.go:141] libmachine: (multinode-688000) Calling .GetIP
	I0722 04:09:07.969809    4927 host.go:66] Checking if "multinode-688000" exists ...
	I0722 04:09:07.970057    4927 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:09:07.970082    4927 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:09:07.978729    4927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52981
	I0722 04:09:07.979073    4927 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:09:07.979390    4927 main.go:141] libmachine: Using API Version  1
	I0722 04:09:07.979404    4927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:09:07.979637    4927 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:09:07.979766    4927 main.go:141] libmachine: (multinode-688000) Calling .DriverName
	I0722 04:09:07.979918    4927 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 04:09:07.979942    4927 main.go:141] libmachine: (multinode-688000) Calling .GetSSHHostname
	I0722 04:09:07.980017    4927 main.go:141] libmachine: (multinode-688000) Calling .GetSSHPort
	I0722 04:09:07.980094    4927 main.go:141] libmachine: (multinode-688000) Calling .GetSSHKeyPath
	I0722 04:09:07.980179    4927 main.go:141] libmachine: (multinode-688000) Calling .GetSSHUsername
	I0722 04:09:07.980259    4927 sshutil.go:53] new ssh client: &{IP:192.169.0.15 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/multinode-688000/id_rsa Username:docker}
	I0722 04:09:08.016442    4927 ssh_runner.go:195] Run: systemctl --version
	I0722 04:09:08.020767    4927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 04:09:08.031375    4927 kubeconfig.go:125] found "multinode-688000" server: "https://192.169.0.15:8443"
	I0722 04:09:08.031397    4927 api_server.go:166] Checking apiserver status ...
	I0722 04:09:08.031431    4927 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0722 04:09:08.042257    4927 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1910/cgroup
	W0722 04:09:08.049359    4927 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1910/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0722 04:09:08.049400    4927 ssh_runner.go:195] Run: ls
	I0722 04:09:08.052788    4927 api_server.go:253] Checking apiserver healthz at https://192.169.0.15:8443/healthz ...
	I0722 04:09:08.055853    4927 api_server.go:279] https://192.169.0.15:8443/healthz returned 200:
	ok
	I0722 04:09:08.055864    4927 status.go:422] multinode-688000 apiserver status = Running (err=<nil>)
	I0722 04:09:08.055872    4927 status.go:257] multinode-688000 status: &{Name:multinode-688000 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 04:09:08.055884    4927 status.go:255] checking status of multinode-688000-m02 ...
	I0722 04:09:08.056137    4927 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:09:08.056166    4927 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:09:08.064819    4927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52985
	I0722 04:09:08.065171    4927 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:09:08.065501    4927 main.go:141] libmachine: Using API Version  1
	I0722 04:09:08.065511    4927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:09:08.065726    4927 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:09:08.065837    4927 main.go:141] libmachine: (multinode-688000-m02) Calling .GetState
	I0722 04:09:08.065933    4927 main.go:141] libmachine: (multinode-688000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 04:09:08.066006    4927 main.go:141] libmachine: (multinode-688000-m02) DBG | hyperkit pid from json: 4655
	I0722 04:09:08.067175    4927 status.go:330] multinode-688000-m02 host status = "Running" (err=<nil>)
	I0722 04:09:08.067185    4927 host.go:66] Checking if "multinode-688000-m02" exists ...
	I0722 04:09:08.067454    4927 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:09:08.067485    4927 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:09:08.075799    4927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52987
	I0722 04:09:08.076151    4927 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:09:08.076499    4927 main.go:141] libmachine: Using API Version  1
	I0722 04:09:08.076516    4927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:09:08.076730    4927 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:09:08.076843    4927 main.go:141] libmachine: (multinode-688000-m02) Calling .GetIP
	I0722 04:09:08.076924    4927 host.go:66] Checking if "multinode-688000-m02" exists ...
	I0722 04:09:08.077172    4927 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:09:08.077194    4927 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:09:08.085407    4927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52989
	I0722 04:09:08.085761    4927 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:09:08.086057    4927 main.go:141] libmachine: Using API Version  1
	I0722 04:09:08.086073    4927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:09:08.086291    4927 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:09:08.086414    4927 main.go:141] libmachine: (multinode-688000-m02) Calling .DriverName
	I0722 04:09:08.086538    4927 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0722 04:09:08.086551    4927 main.go:141] libmachine: (multinode-688000-m02) Calling .GetSSHHostname
	I0722 04:09:08.086644    4927 main.go:141] libmachine: (multinode-688000-m02) Calling .GetSSHPort
	I0722 04:09:08.086723    4927 main.go:141] libmachine: (multinode-688000-m02) Calling .GetSSHKeyPath
	I0722 04:09:08.086809    4927 main.go:141] libmachine: (multinode-688000-m02) Calling .GetSSHUsername
	I0722 04:09:08.086884    4927 sshutil.go:53] new ssh client: &{IP:192.169.0.16 Port:22 SSHKeyPath:/Users/jenkins/minikube-integration/19313-1111/.minikube/machines/multinode-688000-m02/id_rsa Username:docker}
	I0722 04:09:08.118754    4927 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0722 04:09:08.129391    4927 status.go:257] multinode-688000-m02 status: &{Name:multinode-688000-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0722 04:09:08.129414    4927 status.go:255] checking status of multinode-688000-m03 ...
	I0722 04:09:08.129702    4927 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:09:08.129726    4927 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:09:08.138338    4927 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:52992
	I0722 04:09:08.138684    4927 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:09:08.139035    4927 main.go:141] libmachine: Using API Version  1
	I0722 04:09:08.139049    4927 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:09:08.139240    4927 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:09:08.139363    4927 main.go:141] libmachine: (multinode-688000-m03) Calling .GetState
	I0722 04:09:08.139446    4927 main.go:141] libmachine: (multinode-688000-m03) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 04:09:08.139522    4927 main.go:141] libmachine: (multinode-688000-m03) DBG | hyperkit pid from json: 4720
	I0722 04:09:08.140653    4927 main.go:141] libmachine: (multinode-688000-m03) DBG | hyperkit pid 4720 missing from process table
	I0722 04:09:08.140672    4927 status.go:330] multinode-688000-m03 host status = "Stopped" (err=<nil>)
	I0722 04:09:08.140680    4927 status.go:343] host is not running, skipping remaining checks
	I0722 04:09:08.140686    4927 status.go:257] multinode-688000-m03 status: &{Name:multinode-688000-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (2.83s)
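
The status check above relies on an exit-code contract that is visible in the output: minikube status exits 0 only when every node is running and exits 7 once at least one host is stopped. Below is a minimal standalone sketch of that check (not part of the test suite), assuming the binary path and profile name from this run:

	// status_check.go: a sketch of the check at multinode_test.go:254 -
	// "minikube status" exits 0 when every node is running and 7 once at
	// least one host is stopped, as the output above shows.
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		// Binary path and profile name are taken from this run; adjust as needed.
		cmd := exec.Command("out/minikube-darwin-amd64", "-p", "multinode-688000", "status")
		out, err := cmd.CombinedOutput()
		fmt.Print(string(out))

		var exitErr *exec.ExitError
		switch {
		case err == nil:
			fmt.Println("all nodes running (exit 0)")
		case errors.As(err, &exitErr) && exitErr.ExitCode() == 7:
			fmt.Println("at least one node is stopped (exit 7)")
		default:
			fmt.Printf("unexpected result: %v\n", err)
		}
	}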

                                                
                                    
TestMultiNode/serial/StartAfterStop (156.33s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 node start m03 -v=7 --alsologtostderr
E0722 04:10:49.695290    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
E0722 04:11:22.411470    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
multinode_test.go:282: (dbg) Done: out/minikube-darwin-amd64 -p multinode-688000 node start m03 -v=7 --alsologtostderr: (2m35.95218994s)
multinode_test.go:290: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (156.33s)

                                                
                                    
TestMultiNode/serial/RestartKeepsNodes (296.51s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-688000
multinode_test.go:321: (dbg) Run:  out/minikube-darwin-amd64 stop -p multinode-688000
multinode_test.go:321: (dbg) Done: out/minikube-darwin-amd64 stop -p multinode-688000: (18.839473588s)
multinode_test.go:326: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-688000 --wait=true -v=8 --alsologtostderr
E0722 04:12:46.624989    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
E0722 04:16:22.465385    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-688000 --wait=true -v=8 --alsologtostderr: (4m37.557662225s)
multinode_test.go:331: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-688000
--- PASS: TestMultiNode/serial/RestartKeepsNodes (296.51s)

                                                
                                    
TestMultiNode/serial/DeleteNode (3.41s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-darwin-amd64 -p multinode-688000 node delete m03: (3.075847251s)
multinode_test.go:422: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (3.41s)

                                                
                                    
TestMultiNode/serial/StopMultiNode (16.81s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 stop
multinode_test.go:345: (dbg) Done: out/minikube-darwin-amd64 -p multinode-688000 stop: (16.658275s)
multinode_test.go:351: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 status
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-688000 status: exit status 7 (78.335618ms)

                                                
                                                
-- stdout --
	multinode-688000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-688000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:358: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 status --alsologtostderr
multinode_test.go:358: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p multinode-688000 status --alsologtostderr: exit status 7 (77.135069ms)

                                                
                                                
-- stdout --
	multinode-688000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	multinode-688000-m02
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0722 04:17:01.236509    5108 out.go:291] Setting OutFile to fd 1 ...
	I0722 04:17:01.236799    5108 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:17:01.236805    5108 out.go:304] Setting ErrFile to fd 2...
	I0722 04:17:01.236809    5108 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0722 04:17:01.236975    5108 root.go:338] Updating PATH: /Users/jenkins/minikube-integration/19313-1111/.minikube/bin
	I0722 04:17:01.237151    5108 out.go:298] Setting JSON to false
	I0722 04:17:01.237173    5108 mustload.go:65] Loading cluster: multinode-688000
	I0722 04:17:01.237215    5108 notify.go:220] Checking for updates...
	I0722 04:17:01.237471    5108 config.go:182] Loaded profile config "multinode-688000": Driver=hyperkit, ContainerRuntime=docker, KubernetesVersion=v1.30.3
	I0722 04:17:01.237486    5108 status.go:255] checking status of multinode-688000 ...
	I0722 04:17:01.237838    5108 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:17:01.237881    5108 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:17:01.246326    5108 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53225
	I0722 04:17:01.246658    5108 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:17:01.247047    5108 main.go:141] libmachine: Using API Version  1
	I0722 04:17:01.247057    5108 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:17:01.247253    5108 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:17:01.247367    5108 main.go:141] libmachine: (multinode-688000) Calling .GetState
	I0722 04:17:01.247483    5108 main.go:141] libmachine: (multinode-688000) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 04:17:01.247515    5108 main.go:141] libmachine: (multinode-688000) DBG | hyperkit pid from json: 5016
	I0722 04:17:01.248474    5108 main.go:141] libmachine: (multinode-688000) DBG | hyperkit pid 5016 missing from process table
	I0722 04:17:01.248495    5108 status.go:330] multinode-688000 host status = "Stopped" (err=<nil>)
	I0722 04:17:01.248503    5108 status.go:343] host is not running, skipping remaining checks
	I0722 04:17:01.248510    5108 status.go:257] multinode-688000 status: &{Name:multinode-688000 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0722 04:17:01.248540    5108 status.go:255] checking status of multinode-688000-m02 ...
	I0722 04:17:01.248782    5108 main.go:141] libmachine: Found binary path at /Users/jenkins/workspace/out/docker-machine-driver-hyperkit
	I0722 04:17:01.248804    5108 main.go:141] libmachine: Launching plugin server for driver hyperkit
	I0722 04:17:01.257024    5108 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:53227
	I0722 04:17:01.257352    5108 main.go:141] libmachine: () Calling .GetVersion
	I0722 04:17:01.257719    5108 main.go:141] libmachine: Using API Version  1
	I0722 04:17:01.257734    5108 main.go:141] libmachine: () Calling .SetConfigRaw
	I0722 04:17:01.257932    5108 main.go:141] libmachine: () Calling .GetMachineName
	I0722 04:17:01.258050    5108 main.go:141] libmachine: (multinode-688000-m02) Calling .GetState
	I0722 04:17:01.258133    5108 main.go:141] libmachine: (multinode-688000-m02) DBG | exe=/Users/jenkins/workspace/out/docker-machine-driver-hyperkit uid=0
	I0722 04:17:01.258204    5108 main.go:141] libmachine: (multinode-688000-m02) DBG | hyperkit pid from json: 5028
	I0722 04:17:01.259121    5108 status.go:330] multinode-688000-m02 host status = "Stopped" (err=<nil>)
	I0722 04:17:01.259132    5108 status.go:343] host is not running, skipping remaining checks
	I0722 04:17:01.259130    5108 main.go:141] libmachine: (multinode-688000-m02) DBG | hyperkit pid 5028 missing from process table
	I0722 04:17:01.259138    5108 status.go:257] multinode-688000-m02 status: &{Name:multinode-688000-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopMultiNode (16.81s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (101.12s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-688000 --wait=true -v=8 --alsologtostderr --driver=hyperkit 
E0722 04:17:46.682244    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
multinode_test.go:376: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-688000 --wait=true -v=8 --alsologtostderr --driver=hyperkit : (1m40.762504907s)
multinode_test.go:382: (dbg) Run:  out/minikube-darwin-amd64 -p multinode-688000 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (101.12s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (44.47s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-darwin-amd64 node list -p multinode-688000
multinode_test.go:464: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-688000-m02 --driver=hyperkit 
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p multinode-688000-m02 --driver=hyperkit : exit status 14 (432.456329ms)

                                                
                                                
-- stdout --
	* [multinode-688000-m02] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1111/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1111/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-688000-m02' is duplicated with machine name 'multinode-688000-m02' in profile 'multinode-688000'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-darwin-amd64 start -p multinode-688000-m03 --driver=hyperkit 
multinode_test.go:472: (dbg) Done: out/minikube-darwin-amd64 start -p multinode-688000-m03 --driver=hyperkit : (40.318096316s)
multinode_test.go:479: (dbg) Run:  out/minikube-darwin-amd64 node add -p multinode-688000
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-darwin-amd64 node add -p multinode-688000: exit status 80 (267.115458ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-688000 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-688000-m03 already exists in multinode-688000-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                                                         │
	│    * If the above advice does not help, please let us know:                                                             │
	│      https://github.com/kubernetes/minikube/issues/new/choose                                                           │
	│                                                                                                                         │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.                                │
	│    * Please also attach the following file to the GitHub issue:                                                         │
	│    * - /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_0.log    │
	│                                                                                                                         │
	╰─────────────────────────────────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-darwin-amd64 delete -p multinode-688000-m03
E0722 04:19:25.514491    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
multinode_test.go:484: (dbg) Done: out/minikube-darwin-amd64 delete -p multinode-688000-m03: (3.396047465s)
--- PASS: TestMultiNode/serial/ValidateNameConflict (44.47s)

                                                
                                    
TestPreload (215.04s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-140000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4
preload_test.go:44: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-140000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.24.4: (1m16.685939776s)
preload_test.go:52: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-140000 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-darwin-amd64 -p test-preload-140000 image pull gcr.io/k8s-minikube/busybox: (1.296209374s)
preload_test.go:58: (dbg) Run:  out/minikube-darwin-amd64 stop -p test-preload-140000
preload_test.go:58: (dbg) Done: out/minikube-darwin-amd64 stop -p test-preload-140000: (8.376975082s)
preload_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p test-preload-140000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit 
E0722 04:21:22.458341    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0722 04:22:46.674577    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
preload_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p test-preload-140000 --memory=2200 --alsologtostderr -v=1 --wait=true --driver=hyperkit : (2m3.281493086s)
preload_test.go:71: (dbg) Run:  out/minikube-darwin-amd64 -p test-preload-140000 image list
helpers_test.go:175: Cleaning up "test-preload-140000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p test-preload-140000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p test-preload-140000: (5.241426743s)
--- PASS: TestPreload (215.04s)

                                                
                                    
TestScheduledStopUnix (106.64s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-darwin-amd64 start -p scheduled-stop-289000 --memory=2048 --driver=hyperkit 
scheduled_stop_test.go:128: (dbg) Done: out/minikube-darwin-amd64 start -p scheduled-stop-289000 --memory=2048 --driver=hyperkit : (35.235729667s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-289000 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.TimeToStop}} -p scheduled-stop-289000 -n scheduled-stop-289000
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-289000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-289000 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-289000 -n scheduled-stop-289000
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-289000
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-darwin-amd64 stop -p scheduled-stop-289000 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 status -p scheduled-stop-289000
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p scheduled-stop-289000: exit status 7 (71.66377ms)

                                                
                                                
-- stdout --
	scheduled-stop-289000
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-289000 -n scheduled-stop-289000
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p scheduled-stop-289000 -n scheduled-stop-289000: exit status 7 (67.670335ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-289000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p scheduled-stop-289000
--- PASS: TestScheduledStopUnix (106.64s)
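
The flow above exercises minikube's scheduled stop: stop --schedule arms a delayed shutdown, status --format={{.TimeToStop}} reports the remaining time, and stop --cancel-scheduled disarms it; once the VM has actually stopped, status exits 7 as shown. The following is a rough sketch of the same sequence, using only the flags and the profile name that appear in this run:

	// scheduled_stop_sketch.go: a rough sketch of the scheduled-stop flow above,
	// using only flags that appear in this run (stop --schedule, status
	// --format={{.TimeToStop}}, stop --cancel-scheduled).
	package main

	import (
		"fmt"
		"os/exec"
	)

	func run(args ...string) (string, error) {
		out, err := exec.Command("out/minikube-darwin-amd64", args...).CombinedOutput()
		return string(out), err
	}

	func main() {
		profile := "scheduled-stop-289000" // profile name from this run

		// Arm a stop five minutes from now, then report the remaining time.
		if _, err := run("stop", "-p", profile, "--schedule", "5m"); err != nil {
			fmt.Println("scheduling the stop failed:", err)
			return
		}
		ttl, _ := run("status", "--format={{.TimeToStop}}", "-p", profile)
		fmt.Println("time to stop:", ttl)

		// Disarm the pending stop; the host keeps running.
		if _, err := run("stop", "-p", profile, "--cancel-scheduled"); err != nil {
			fmt.Println("cancelling the stop failed:", err)
		}
	}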

                                                
                                    
TestSkaffold (116.21s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:59: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe369211206 version
skaffold_test.go:59: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe369211206 version: (1.70644764s)
skaffold_test.go:63: skaffold version: v2.12.0
skaffold_test.go:66: (dbg) Run:  out/minikube-darwin-amd64 start -p skaffold-264000 --memory=2600 --driver=hyperkit 
skaffold_test.go:66: (dbg) Done: out/minikube-darwin-amd64 start -p skaffold-264000 --memory=2600 --driver=hyperkit : (38.832011406s)
skaffold_test.go:86: copying out/minikube-darwin-amd64 to /Users/jenkins/workspace/out/minikube
skaffold_test.go:105: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe369211206 run --minikube-profile skaffold-264000 --kube-context skaffold-264000 --status-check=true --port-forward=false --interactive=false
E0722 04:26:22.452110    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
skaffold_test.go:105: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/skaffold.exe369211206 run --minikube-profile skaffold-264000 --kube-context skaffold-264000 --status-check=true --port-forward=false --interactive=false: (56.995204808s)
skaffold_test.go:111: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-app" in namespace "default" ...
helpers_test.go:344: "leeroy-app-d474dff8d-4cjbg" [fb24d643-0fc9-498b-9800-e83932e652ee] Running
skaffold_test.go:111: (dbg) TestSkaffold: app=leeroy-app healthy within 6.004815605s
skaffold_test.go:114: (dbg) TestSkaffold: waiting 1m0s for pods matching "app=leeroy-web" in namespace "default" ...
helpers_test.go:344: "leeroy-web-7567475944-wgbtv" [17c5e223-6545-4a9e-b9b5-77734ec3da23] Running
skaffold_test.go:114: (dbg) TestSkaffold: app=leeroy-web healthy within 5.005088663s
helpers_test.go:175: Cleaning up "skaffold-264000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p skaffold-264000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p skaffold-264000: (5.244126814s)
--- PASS: TestSkaffold (116.21s)

                                                
                                    
TestRunningBinaryUpgrade (72.6s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.1031519347 start -p running-upgrade-532000 --memory=2200 --vm-driver=hyperkit 
version_upgrade_test.go:120: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.1031519347 start -p running-upgrade-532000 --memory=2200 --vm-driver=hyperkit : (42.740539485s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-darwin-amd64 start -p running-upgrade-532000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:130: (dbg) Done: out/minikube-darwin-amd64 start -p running-upgrade-532000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (23.464170096s)
helpers_test.go:175: Cleaning up "running-upgrade-532000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p running-upgrade-532000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p running-upgrade-532000: (5.247532066s)
--- PASS: TestRunningBinaryUpgrade (72.60s)

                                                
                                    
TestKubernetesUpgrade (240.57s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-759000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:222: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-759000 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=hyperkit : (54.182668452s)
version_upgrade_test.go:227: (dbg) Run:  out/minikube-darwin-amd64 stop -p kubernetes-upgrade-759000
E0722 04:34:22.927774    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/skaffold-264000/client.crt: no such file or directory
version_upgrade_test.go:227: (dbg) Done: out/minikube-darwin-amd64 stop -p kubernetes-upgrade-759000: (8.388368119s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-darwin-amd64 -p kubernetes-upgrade-759000 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p kubernetes-upgrade-759000 status --format={{.Host}}: exit status 7 (66.742947ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-759000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperkit 
E0722 04:36:05.562428    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
version_upgrade_test.go:243: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-759000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperkit : (2m27.667541205s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-759000 version --output=json
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-759000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit 
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p kubernetes-upgrade-759000 --memory=2200 --kubernetes-version=v1.20.0 --driver=hyperkit : exit status 106 (536.197335ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-759000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1111/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1111/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.31.0-beta.0 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-759000
	    minikube start -p kubernetes-upgrade-759000 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-7590002 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.31.0-beta.0, by running:
	    
	    minikube start -p kubernetes-upgrade-759000 --kubernetes-version=v1.31.0-beta.0
	    

                                                
                                                
** /stderr **
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-darwin-amd64 start -p kubernetes-upgrade-759000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperkit 
E0722 04:37:06.766927    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/skaffold-264000/client.crt: no such file or directory
version_upgrade_test.go:275: (dbg) Done: out/minikube-darwin-amd64 start -p kubernetes-upgrade-759000 --memory=2200 --kubernetes-version=v1.31.0-beta.0 --alsologtostderr -v=1 --driver=hyperkit : (24.43962384s)
helpers_test.go:175: Cleaning up "kubernetes-upgrade-759000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p kubernetes-upgrade-759000
helpers_test.go:178: (dbg) Done: out/minikube-darwin-amd64 delete -p kubernetes-upgrade-759000: (5.24276755s)
--- PASS: TestKubernetesUpgrade (240.57s)
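
The downgrade attempt above is expected to fail: minikube refuses to move the existing v1.31.0-beta.0 cluster back to v1.20.0 and exits with status 106 (K8S_DOWNGRADE_UNSUPPORTED), printing the recovery suggestions shown in the output. Below is a small sketch of the assertion behind version_upgrade_test.go:269, reusing the exact command line from the log; it is an illustration, not the test itself:

	// downgrade_check.go: asking the existing v1.31.0-beta.0 cluster to start as
	// v1.20.0 must fail with exit status 106 (K8S_DOWNGRADE_UNSUPPORTED).
	package main

	import (
		"errors"
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "start",
			"-p", "kubernetes-upgrade-759000",
			"--memory=2200", "--kubernetes-version=v1.20.0", "--driver=hyperkit")
		err := cmd.Run()

		var exitErr *exec.ExitError
		if errors.As(err, &exitErr) && exitErr.ExitCode() == 106 {
			fmt.Println("downgrade correctly refused (exit 106)")
			return
		}
		fmt.Printf("expected exit status 106, got: %v\n", err)
	}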

                                                
                                    
TestStoppedBinaryUpgrade/Setup (1.36s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (1.36s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (107.33s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.2069109420 start -p stopped-upgrade-692000 --memory=2200 --vm-driver=hyperkit 
E0722 04:27:29.735346    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
E0722 04:27:46.667987    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
version_upgrade_test.go:183: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.2069109420 start -p stopped-upgrade-692000 --memory=2200 --vm-driver=hyperkit : (56.273883652s)
version_upgrade_test.go:192: (dbg) Run:  /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.2069109420 -p stopped-upgrade-692000 stop
version_upgrade_test.go:192: (dbg) Done: /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/minikube-v1.26.0.2069109420 -p stopped-upgrade-692000 stop: (8.258667149s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-darwin-amd64 start -p stopped-upgrade-692000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit 
version_upgrade_test.go:198: (dbg) Done: out/minikube-darwin-amd64 start -p stopped-upgrade-692000 --memory=2200 --alsologtostderr -v=1 --driver=hyperkit : (42.79968113s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (107.33s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (2.46s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-darwin-amd64 logs -p stopped-upgrade-692000
version_upgrade_test.go:206: (dbg) Done: out/minikube-darwin-amd64 logs -p stopped-upgrade-692000: (2.463616028s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (2.46s)

                                                
                                    
TestPause/serial/Start (58.18s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-370000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit 
pause_test.go:80: (dbg) Done: out/minikube-darwin-amd64 start -p pause-370000 --memory=2048 --install-addons=false --wait=all --driver=hyperkit : (58.177599922s)
--- PASS: TestPause/serial/Start (58.18s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.66s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-533000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit 
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-darwin-amd64 start -p NoKubernetes-533000 --no-kubernetes --kubernetes-version=1.20 --driver=hyperkit : exit status 14 (662.058767ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-533000] minikube v1.33.1 on Darwin 14.5
	  - MINIKUBE_LOCATION=19313
	  - KUBECONFIG=/Users/jenkins/minikube-integration/19313-1111/kubeconfig
	  - MINIKUBE_BIN=out/minikube-darwin-amd64
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - MINIKUBE_HOME=/Users/jenkins/minikube-integration/19313-1111/.minikube
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.66s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (42.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-533000 --driver=hyperkit 
no_kubernetes_test.go:95: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-533000 --driver=hyperkit : (41.963005032s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-533000 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (42.13s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (41.57s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-darwin-amd64 start -p pause-370000 --alsologtostderr -v=1 --driver=hyperkit 
pause_test.go:92: (dbg) Done: out/minikube-darwin-amd64 start -p pause-370000 --alsologtostderr -v=1 --driver=hyperkit : (41.55583974s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (41.57s)

                                                
                                    
TestNoKubernetes/serial/StartWithStopK8s (8.39s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-533000 --no-kubernetes --driver=hyperkit 
no_kubernetes_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-533000 --no-kubernetes --driver=hyperkit : (5.864830974s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-darwin-amd64 -p NoKubernetes-533000 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-darwin-amd64 -p NoKubernetes-533000 status -o json: exit status 2 (141.712571ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-533000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-darwin-amd64 delete -p NoKubernetes-533000
no_kubernetes_test.go:124: (dbg) Done: out/minikube-darwin-amd64 delete -p NoKubernetes-533000: (2.386631178s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (8.39s)
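
The status -o json call above returns one flat object per node; with Kubernetes disabled the host reports Running while the kubelet and apiserver report Stopped, and the command exits 2. Here is a short sketch that decodes exactly that payload (the struct fields are copied from the output above and are not minikube's own types):

	// status_json_sketch.go: decodes the "minikube status -o json" payload shown above.
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type nodeStatus struct {
		Name       string
		Host       string
		Kubelet    string
		APIServer  string
		Kubeconfig string
		Worker     bool
	}

	func main() {
		// Payload copied verbatim from the log above.
		raw := `{"Name":"NoKubernetes-533000","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}`

		var st nodeStatus
		if err := json.Unmarshal([]byte(raw), &st); err != nil {
			panic(err)
		}
		// With --no-kubernetes the VM is up but nothing Kubernetes-related runs.
		fmt.Printf("%s: host=%s kubelet=%s apiserver=%s\n", st.Name, st.Host, st.Kubelet, st.APIServer)
	}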

                                                
                                    
TestPause/serial/Pause (0.56s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-370000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.56s)

                                                
                                    
TestPause/serial/VerifyStatus (0.16s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-darwin-amd64 status -p pause-370000 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-darwin-amd64 status -p pause-370000 --output=json --layout=cluster: exit status 2 (163.140793ms)

                                                
                                                
-- stdout --
	{"Name":"pause-370000","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 12 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.33.1","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-370000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.16s)
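
With --output=json --layout=cluster, status reports HTTP-style codes: 200 is OK, 405 is Stopped, and 418 is Paused, and the command exits 2 while the cluster is paused, as the output above shows. The sketch below decodes a trimmed version of that payload; the struct fields follow the output and are an illustration, not minikube's own types:

	// cluster_status_sketch.go: decodes the --layout=cluster JSON shown above
	// (StatusCode 200=OK, 405=Stopped, 418=Paused).
	package main

	import (
		"encoding/json"
		"fmt"
	)

	type component struct {
		Name       string
		StatusCode int
		StatusName string
	}

	type node struct {
		Name       string
		StatusCode int
		StatusName string
		Components map[string]component
	}

	type clusterStatus struct {
		Name       string
		StatusCode int
		StatusName string
		Nodes      []node
	}

	func main() {
		// Trimmed payload based on the output above.
		raw := `{"Name":"pause-370000","StatusCode":418,"StatusName":"Paused","Nodes":[{"Name":"pause-370000","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}`

		var cs clusterStatus
		if err := json.Unmarshal([]byte(raw), &cs); err != nil {
			panic(err)
		}
		fmt.Printf("cluster %s is %s\n", cs.Name, cs.StatusName)
		for _, n := range cs.Nodes {
			for name, c := range n.Components {
				fmt.Printf("  %s/%s: %s\n", n.Name, name, c.StatusName)
			}
		}
	}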

                                                
                                    
TestPause/serial/Unpause (0.53s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-darwin-amd64 unpause -p pause-370000 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.53s)

                                                
                                    
TestPause/serial/PauseAgain (0.53s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-darwin-amd64 pause -p pause-370000 --alsologtostderr -v=5
--- PASS: TestPause/serial/PauseAgain (0.53s)

                                                
                                    
TestPause/serial/DeletePaused (5.25s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-darwin-amd64 delete -p pause-370000 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-darwin-amd64 delete -p pause-370000 --alsologtostderr -v=5: (5.245139292s)
--- PASS: TestPause/serial/DeletePaused (5.25s)

                                                
                                    
TestPause/serial/VerifyDeletedResources (0.19s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-darwin-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.19s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (4.24s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19313
- KUBECONFIG=/Users/jenkins/minikube-integration/19313-1111/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3985413819/001
* Using the hyperkit driver based on user configuration
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3985413819/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3985413819/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.11.0-to-current3985413819/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.11.0-to-current (4.24s)

                                                
                                    
TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.75s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current
* minikube v1.33.1 on darwin
- MINIKUBE_LOCATION=19313
- KUBECONFIG=/Users/jenkins/minikube-integration/19313-1111/kubeconfig
- MINIKUBE_BIN=out/minikube-darwin-amd64
- MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
- MINIKUBE_FORCE_SYSTEMD=
- MINIKUBE_HOME=/var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current340045542/001
* Using the hyperkit driver based on user configuration
* Downloading driver docker-machine-driver-hyperkit:
* The 'hyperkit' driver requires elevated permissions. The following commands will be executed:

                                                
                                                
$ sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current340045542/001/.minikube/bin/docker-machine-driver-hyperkit 
$ sudo chmod u+s /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current340045542/001/.minikube/bin/docker-machine-driver-hyperkit 

                                                
                                                

                                                
                                                
! Unable to update hyperkit driver: [sudo chown root:wheel /var/folders/vq/yhv778t970xgml0dzm5fdwlr0000gp/T/TestHyperkitDriverSkipUpgradeupgrade-v1.2.0-to-current340045542/001/.minikube/bin/docker-machine-driver-hyperkit] requires a password, and --interactive=false
* Starting "minikube" primary control-plane node in "minikube" cluster
* Download complete!
--- PASS: TestHyperkitDriverSkipUpgrade/upgrade-v1.2.0-to-current (7.75s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-533000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-533000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (127.983249ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.13s)
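
The verification above asks systemd inside the VM whether the kubelet unit is active: systemctl is-active exits 0 only for an active unit, so the non-zero exit propagated by minikube ssh (status 3 inside the VM in this run) is what the test treats as "kubelet not running". A minimal sketch of the same probe, using the command line from the log:

	// kubelet_check_sketch.go: mirrors the check at no_kubernetes_test.go:147 -
	// a non-zero exit from "systemctl is-active" via "minikube ssh" means the
	// kubelet unit is not running in the guest.
	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		cmd := exec.Command("out/minikube-darwin-amd64", "ssh", "-p", "NoKubernetes-533000",
			"sudo systemctl is-active --quiet service kubelet")
		if err := cmd.Run(); err != nil {
			fmt.Println("kubelet is not active, as expected with --no-kubernetes:", err)
			return
		}
		fmt.Println("kubelet is active (unexpected for this profile)")
	}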

                                                
                                    
TestNoKubernetes/serial/ProfileList (0.36s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-darwin-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-darwin-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.36s)

                                                
                                    
TestNoKubernetes/serial/Stop (8.38s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-darwin-amd64 stop -p NoKubernetes-533000
E0722 04:31:22.512084    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
no_kubernetes_test.go:158: (dbg) Done: out/minikube-darwin-amd64 stop -p NoKubernetes-533000: (8.384452247s)
--- PASS: TestNoKubernetes/serial/Stop (8.38s)

                                                
                                    
TestNoKubernetes/serial/StartNoArgs (19.58s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-darwin-amd64 start -p NoKubernetes-533000 --driver=hyperkit 
E0722 04:31:39.080445    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/skaffold-264000/client.crt: no such file or directory
E0722 04:31:39.086332    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/skaffold-264000/client.crt: no such file or directory
E0722 04:31:39.097143    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/skaffold-264000/client.crt: no such file or directory
E0722 04:31:39.117299    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/skaffold-264000/client.crt: no such file or directory
E0722 04:31:39.159072    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/skaffold-264000/client.crt: no such file or directory
E0722 04:31:39.240003    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/skaffold-264000/client.crt: no such file or directory
E0722 04:31:39.400208    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/skaffold-264000/client.crt: no such file or directory
E0722 04:31:39.720622    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/skaffold-264000/client.crt: no such file or directory
E0722 04:31:40.361873    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/skaffold-264000/client.crt: no such file or directory
E0722 04:31:41.642427    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/skaffold-264000/client.crt: no such file or directory
E0722 04:31:44.204392    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/skaffold-264000/client.crt: no such file or directory
no_kubernetes_test.go:191: (dbg) Done: out/minikube-darwin-amd64 start -p NoKubernetes-533000 --driver=hyperkit : (19.584453541s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (19.58s)

                                                
                                    
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-darwin-amd64 ssh -p NoKubernetes-533000 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-darwin-amd64 ssh -p NoKubernetes-533000 "sudo systemctl is-active --quiet service kubelet": exit status 1 (129.841812ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.13s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/FirstStart (149.69s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-732000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0
E0722 04:37:46.724534    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-732000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0: (2m29.69110943s)
--- PASS: TestStartStop/group/old-k8s-version/serial/FirstStart (149.69s)

                                                
                                    
TestStartStop/group/no-preload/serial/FirstStart (97.9s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-189000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-189000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0: (1m37.898559946s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (97.90s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/DeployApp (9.49s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-732000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [743d299e-b551-4c9d-aff0-4bc3fbbc4a71] Pending
helpers_test.go:344: "busybox" [743d299e-b551-4c9d-aff0-4bc3fbbc4a71] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [743d299e-b551-4c9d-aff0-4bc3fbbc4a71] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/old-k8s-version/serial/DeployApp: integration-test=busybox healthy within 9.002888958s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-732000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/old-k8s-version/serial/DeployApp (9.49s)
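Note: the deploy step above is driven by the test harness, but a rough manual equivalent (assuming the same kubeconfig context, and that testdata/busybox.yaml labels the pod integration-test=busybox as the wait above implies) would be:

$ kubectl --context old-k8s-version-732000 create -f testdata/busybox.yaml
$ kubectl --context old-k8s-version-732000 wait --for=condition=Ready pod -l integration-test=busybox --timeout=8m
$ kubectl --context old-k8s-version-732000 exec busybox -- /bin/sh -c "ulimit -n"

The harness polls pod status itself rather than calling kubectl wait; the sketch just mirrors the same readiness condition and 8m timeout.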

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.73s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p old-k8s-version-732000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-732000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (0.73s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Stop (8.39s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p old-k8s-version-732000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p old-k8s-version-732000 --alsologtostderr -v=3: (8.388362086s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (8.39s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-732000 -n old-k8s-version-732000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-732000 -n old-k8s-version-732000: exit status 7 (68.761185ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-732000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.32s)
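Note: this subtest depends on addons being configurable while the profile is stopped; a minimal sketch of the same sequence (the dashboard addon is recorded in the profile config and should come up on the next start) is:

$ out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-732000     # prints "Stopped" and exits 7
$ out/minikube-darwin-amd64 addons enable dashboard -p old-k8s-version-732000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4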

                                                
                                    
TestStartStop/group/old-k8s-version/serial/SecondStart (416.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p old-k8s-version-732000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p old-k8s-version-732000 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=hyperkit  --kubernetes-version=v1.20.0: (6m56.528276889s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p old-k8s-version-732000 -n old-k8s-version-732000
--- PASS: TestStartStop/group/old-k8s-version/serial/SecondStart (416.70s)

                                                
                                    
TestStartStop/group/no-preload/serial/DeployApp (7.21s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-189000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [fdb21f3c-9fa2-4a75-a752-bb65d4102d4a] Pending
helpers_test.go:344: "busybox" [fdb21f3c-9fa2-4a75-a752-bb65d4102d4a] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [fdb21f3c-9fa2-4a75-a752-bb65d4102d4a] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.004604277s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-189000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.21s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.76s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p no-preload-189000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-189000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (0.76s)

                                                
                                    
TestStartStop/group/no-preload/serial/Stop (8.44s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p no-preload-189000 --alsologtostderr -v=3
E0722 04:41:22.502869    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p no-preload-189000 --alsologtostderr -v=3: (8.441429082s)
--- PASS: TestStartStop/group/no-preload/serial/Stop (8.44s)

                                                
                                    
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.34s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-189000 -n no-preload-189000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-189000 -n no-preload-189000: exit status 7 (66.896881ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p no-preload-189000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (0.34s)

                                                
                                    
TestStartStop/group/no-preload/serial/SecondStart (288.53s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p no-preload-189000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0
E0722 04:41:39.070545    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/skaffold-264000/client.crt: no such file or directory
E0722 04:42:46.717548    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
E0722 04:44:09.786016    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p no-preload-189000 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0: (4m48.360814289s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p no-preload-189000 -n no-preload-189000
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (288.53s)

                                                
                                    
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-7lkxh" [8fbe1b36-dcb7-479b-b36b-a07efac1c8e4] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004026938s
--- PASS: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-5cc9f66cf4-7lkxh" [8fbe1b36-dcb7-479b-b36b-a07efac1c8e4] Running
E0722 04:46:22.579250    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.00309137s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-189000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p no-preload-189000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/no-preload/serial/VerifyKubernetesImages (0.16s)

                                                
                                    
TestStartStop/group/no-preload/serial/Pause (2.07s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p no-preload-189000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-189000 -n no-preload-189000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-189000 -n no-preload-189000: exit status 2 (159.208057ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-189000 -n no-preload-189000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-189000 -n no-preload-189000: exit status 2 (158.882355ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p no-preload-189000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-189000 -n no-preload-189000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-189000 -n no-preload-189000
--- PASS: TestStartStop/group/no-preload/serial/Pause (2.07s)
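Note: the pause check reduces to the flow below; minikube status intentionally exits non-zero while components are paused or stopped, which is why the harness marks exit status 2 as "may be ok":

$ out/minikube-darwin-amd64 pause -p no-preload-189000
$ out/minikube-darwin-amd64 status --format={{.APIServer}} -p no-preload-189000    # "Paused", exit 2
$ out/minikube-darwin-amd64 status --format={{.Kubelet}} -p no-preload-189000      # "Stopped", exit 2
$ out/minikube-darwin-amd64 unpause -p no-preload-189000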

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.93s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-961000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.3
E0722 04:46:39.147080    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/skaffold-264000/client.crt: no such file or directory
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-961000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.3: (51.93471635s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (51.93s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-c547w" [33aa077f-a030-459a-b351-2de71661b48c] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.004265996s
--- PASS: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (6.01s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-cd95d586-c547w" [33aa077f-a030-459a-b351-2de71661b48c] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.002650364s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-732000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.22s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-961000 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [9d619a09-97c1-4f1e-88ad-05a4d5a0799d] Pending
helpers_test.go:344: "busybox" [9d619a09-97c1-4f1e-88ad-05a4d5a0799d] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [9d619a09-97c1-4f1e-88ad-05a4d5a0799d] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 8.00337389s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-961000 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (8.22s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p old-k8s-version-732000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/old-k8s-version/serial/VerifyKubernetesImages (0.15s)

                                                
                                    
TestStartStop/group/old-k8s-version/serial/Pause (1.88s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p old-k8s-version-732000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-732000 -n old-k8s-version-732000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-732000 -n old-k8s-version-732000: exit status 2 (162.369595ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-732000 -n old-k8s-version-732000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-732000 -n old-k8s-version-732000: exit status 2 (159.54148ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p old-k8s-version-732000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p old-k8s-version-732000 -n old-k8s-version-732000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p old-k8s-version-732000 -n old-k8s-version-732000
--- PASS: TestStartStop/group/old-k8s-version/serial/Pause (1.88s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.74s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p default-k8s-diff-port-961000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-961000 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (0.74s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Stop (8.42s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p default-k8s-diff-port-961000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p default-k8s-diff-port-961000 --alsologtostderr -v=3: (8.418131187s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Stop (8.42s)

                                                
                                    
TestStartStop/group/newest-cni/serial/FirstStart (157.71s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-721000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0
start_stop_delete_test.go:186: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-721000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0: (2m37.710355317s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (157.71s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.31s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-961000 -n default-k8s-diff-port-961000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-961000 -n default-k8s-diff-port-961000: exit status 7 (66.444907ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p default-k8s-diff-port-961000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (0.31s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (300s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p default-k8s-diff-port-961000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.3
E0722 04:47:46.797489    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
E0722 04:48:02.200402    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/skaffold-264000/client.crt: no such file or directory
E0722 04:49:58.909855    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/old-k8s-version-732000/client.crt: no such file or directory
E0722 04:49:58.916103    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/old-k8s-version-732000/client.crt: no such file or directory
E0722 04:49:58.926549    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/old-k8s-version-732000/client.crt: no such file or directory
E0722 04:49:58.946879    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/old-k8s-version-732000/client.crt: no such file or directory
E0722 04:49:58.988066    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/old-k8s-version-732000/client.crt: no such file or directory
E0722 04:49:59.070148    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/old-k8s-version-732000/client.crt: no such file or directory
E0722 04:49:59.230612    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/old-k8s-version-732000/client.crt: no such file or directory
E0722 04:49:59.552287    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/old-k8s-version-732000/client.crt: no such file or directory
E0722 04:50:00.193272    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/old-k8s-version-732000/client.crt: no such file or directory
E0722 04:50:01.474885    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/old-k8s-version-732000/client.crt: no such file or directory
E0722 04:50:04.036244    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/old-k8s-version-732000/client.crt: no such file or directory
E0722 04:50:09.157187    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/old-k8s-version-732000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p default-k8s-diff-port-961000 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=hyperkit  --kubernetes-version=v1.30.3: (4m59.820347521s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p default-k8s-diff-port-961000 -n default-k8s-diff-port-961000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (300.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-darwin-amd64 addons enable metrics-server -p newest-cni-721000 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (0.86s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Stop (8.41s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p newest-cni-721000 --alsologtostderr -v=3
E0722 04:50:19.398833    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/old-k8s-version-732000/client.crt: no such file or directory
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p newest-cni-721000 --alsologtostderr -v=3: (8.405464275s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (8.41s)

                                                
                                    
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.33s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-721000 -n newest-cni-721000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-721000 -n newest-cni-721000: exit status 7 (66.523753ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p newest-cni-721000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.33s)

                                                
                                    
TestStartStop/group/newest-cni/serial/SecondStart (145.17s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p newest-cni-721000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0
E0722 04:50:39.880559    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/old-k8s-version-732000/client.crt: no such file or directory
E0722 04:51:06.762293    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/no-preload-189000/client.crt: no such file or directory
E0722 04:51:06.768729    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/no-preload-189000/client.crt: no such file or directory
E0722 04:51:06.781003    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/no-preload-189000/client.crt: no such file or directory
E0722 04:51:06.801334    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/no-preload-189000/client.crt: no such file or directory
E0722 04:51:06.842655    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/no-preload-189000/client.crt: no such file or directory
E0722 04:51:06.923977    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/no-preload-189000/client.crt: no such file or directory
E0722 04:51:07.084156    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/no-preload-189000/client.crt: no such file or directory
E0722 04:51:07.405575    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/no-preload-189000/client.crt: no such file or directory
E0722 04:51:08.046930    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/no-preload-189000/client.crt: no such file or directory
E0722 04:51:09.327366    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/no-preload-189000/client.crt: no such file or directory
E0722 04:51:11.887690    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/no-preload-189000/client.crt: no such file or directory
E0722 04:51:17.009996    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/no-preload-189000/client.crt: no such file or directory
E0722 04:51:20.840490    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/old-k8s-version-732000/client.crt: no such file or directory
E0722 04:51:22.576431    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0722 04:51:27.250413    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/no-preload-189000/client.crt: no such file or directory
E0722 04:51:39.145090    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/skaffold-264000/client.crt: no such file or directory
E0722 04:51:47.732406    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/no-preload-189000/client.crt: no such file or directory
E0722 04:52:28.693151    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/no-preload-189000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p newest-cni-721000 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=hyperkit  --kubernetes-version=v1.31.0-beta.0: (2m25.006618904s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p newest-cni-721000 -n newest-cni-721000
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (145.17s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-fnpc2" [d4981a3c-8fa2-4a9b-ac04-de8ab0a3a604] Running
E0722 04:52:42.760139    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/old-k8s-version-732000/client.crt: no such file or directory
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 6.002823143s
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (6.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p newest-cni-721000 image list --format=json
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.16s)

                                                
                                    
TestStartStop/group/newest-cni/serial/Pause (1.83s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p newest-cni-721000 --alsologtostderr -v=1
E0722 04:52:45.632629    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-721000 -n newest-cni-721000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-721000 -n newest-cni-721000: exit status 2 (154.578566ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-721000 -n newest-cni-721000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-721000 -n newest-cni-721000: exit status 2 (151.67244ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p newest-cni-721000 --alsologtostderr -v=1
E0722 04:52:46.793609    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p newest-cni-721000 -n newest-cni-721000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p newest-cni-721000 -n newest-cni-721000
--- PASS: TestStartStop/group/newest-cni/serial/Pause (1.83s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-fnpc2" [d4981a3c-8fa2-4a9b-ac04-de8ab0a3a604] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003713055s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-961000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (5.06s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.15s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p default-k8s-diff-port-961000 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: gcr.io/k8s-minikube/busybox:1.28.4-glibc
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/VerifyKubernetesImages (0.15s)

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/Pause (2.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p default-k8s-diff-port-961000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-961000 -n default-k8s-diff-port-961000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-961000 -n default-k8s-diff-port-961000: exit status 2 (162.700225ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-961000 -n default-k8s-diff-port-961000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-961000 -n default-k8s-diff-port-961000: exit status 2 (166.806241ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p default-k8s-diff-port-961000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-961000 -n default-k8s-diff-port-961000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p default-k8s-diff-port-961000 -n default-k8s-diff-port-961000
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/Pause (2.19s)

                                                
                                    
TestNetworkPlugins/group/auto/Start (60.6s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p auto-978000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit 
E0722 04:53:50.613450    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/no-preload-189000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p auto-978000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=hyperkit : (1m0.601211654s)
--- PASS: TestNetworkPlugins/group/auto/Start (60.60s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p auto-978000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.15s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (11.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-978000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-b5z7f" [c5263a53-6e3f-4014-bf5f-ec93d7af1b9e] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-b5z7f" [c5263a53-6e3f-4014-bf5f-ec93d7af1b9e] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 11.002827834s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (11.13s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-978000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-978000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.1s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-978000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.10s)
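Note: the three probes above (in-cluster DNS, localhost, and hairpin) can be reproduced by hand against the netcat deployment created from testdata/netcat-deployment.yaml; the last command makes the pod reach itself through its own service name, which is what exercises hairpin traffic:

$ kubectl --context auto-978000 exec deployment/netcat -- nslookup kubernetes.default
$ kubectl --context auto-978000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
$ kubectl --context auto-978000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"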

                                                
                                    
TestNetworkPlugins/group/calico/Start (82.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p calico-978000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit 
E0722 04:54:58.907266    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/old-k8s-version-732000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p calico-978000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=hyperkit : (1m22.449647726s)
--- PASS: TestNetworkPlugins/group/calico/Start (82.45s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Stop (8.38s)

=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-darwin-amd64 stop -p embed-certs-781000 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-darwin-amd64 stop -p embed-certs-781000 --alsologtostderr -v=3: (8.381180187s)
--- PASS: TestStartStop/group/embed-certs/serial/Stop (8.38s)

                                                
                                    
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.31s)

=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-781000 -n embed-certs-781000
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-781000 -n embed-certs-781000: exit status 7 (66.19843ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-781000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (0.31s)

                                                
                                    
TestStartStop/group/embed-certs/serial/SecondStart (51.59s)

=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-darwin-amd64 start -p embed-certs-781000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.3
E0722 04:55:26.599703    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/old-k8s-version-732000/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-darwin-amd64 start -p embed-certs-781000 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=hyperkit  --kubernetes-version=v1.30.3: (51.427853491s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-781000 -n embed-certs-781000
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (51.59s)
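
Taken together, the embed-certs Stop, EnableAddonAfterStop and SecondStart steps exercise the stop, reconfigure-while-stopped, restart path. Condensed from the invocations above (a sketch; the logging and wait flags are dropped):

	out/minikube-darwin-amd64 stop -p embed-certs-781000
	out/minikube-darwin-amd64 status --format={{.Host}} -p embed-certs-781000 -n embed-certs-781000
	# status exits 7 and prints "Stopped" at this point; the test treats that as acceptable
	out/minikube-darwin-amd64 addons enable dashboard -p embed-certs-781000 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
	out/minikube-darwin-amd64 start -p embed-certs-781000 --memory=2200 --embed-certs --driver=hyperkit --kubernetes-version=v1.30.3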

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-f2qcd" [af38e081-e880-4b36-843e-ac7f36c0bbc4] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.00444948s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)
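
The ControllerPod step only waits for the CNI's node agent to come up; done by hand, that amounts to watching for the labelled pod in kube-system (a sketch for reference):

	kubectl --context calico-978000 get pods -n kube-system -l k8s-app=calico-node
	# the test polls (up to 10m here) until a matching pod reports Running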

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p calico-978000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.16s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.14s)

=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-978000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-jv9xz" [d5d81148-1dda-4e55-8e27-9e5cebdd7190] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-jv9xz" [d5d81148-1dda-4e55-8e27-9e5cebdd7190] Running
E0722 04:56:06.759869    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/no-preload-189000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.002968912s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.14s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-978000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-978000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-978000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.10s)

                                                
                                    
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11s)

=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-flpvx" [da0aa37c-4ecb-4749-b675-873d531a53cd] Pending / Ready:ContainersNotReady (containers with unready status: [kubernetes-dashboard]) / ContainersReady:ContainersNotReady (containers with unready status: [kubernetes-dashboard])
helpers_test.go:344: "kubernetes-dashboard-779776cb65-flpvx" [da0aa37c-4ecb-4749-b675-873d531a53cd] Running
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 11.003909455s
--- PASS: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (11.00s)

                                                
                                    
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)

=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:344: "kubernetes-dashboard-779776cb65-flpvx" [da0aa37c-4ecb-4749-b675-873d531a53cd] Running
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: k8s-app=kubernetes-dashboard healthy within 5.003752565s
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-781000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
--- PASS: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (5.07s)
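
The two dashboard checks above confirm that both the dashboard pod and the addon's metrics-scraper deployment are back after the restart. A manual sketch of the same verification:

	kubectl --context embed-certs-781000 get pods -n kubernetes-dashboard -l k8s-app=kubernetes-dashboard
	kubectl --context embed-certs-781000 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard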

                                                
                                    
TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.15s)

=== RUN   TestStartStop/group/embed-certs/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-darwin-amd64 -p embed-certs-781000 image list --format=json
--- PASS: TestStartStop/group/embed-certs/serial/VerifyKubernetesImages (0.15s)

                                                
                                    
TestStartStop/group/embed-certs/serial/Pause (2.08s)

=== RUN   TestStartStop/group/embed-certs/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 pause -p embed-certs-781000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-781000 -n embed-certs-781000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-781000 -n embed-certs-781000: exit status 2 (230.274529ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-781000 -n embed-certs-781000
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-781000 -n embed-certs-781000: exit status 2 (201.772107ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 unpause -p embed-certs-781000 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-781000 -n embed-certs-781000
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-781000 -n embed-certs-781000
--- PASS: TestStartStop/group/embed-certs/serial/Pause (2.08s)
E0722 05:03:53.723103    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/custom-flannel-978000/client.crt: no such file or directory
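
The Pause step above drives pause, status, unpause, status; exit status 2 from minikube status is expected while components are paused, which is why the test logs it as "may be ok". A sketch of the same loop:

	out/minikube-darwin-amd64 pause -p embed-certs-781000 --alsologtostderr -v=1
	out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-781000 -n embed-certs-781000
	# prints "Paused" and exits 2 while paused
	out/minikube-darwin-amd64 status --format={{.Kubelet}} -p embed-certs-781000 -n embed-certs-781000
	# prints "Stopped" and exits 2 while paused
	out/minikube-darwin-amd64 unpause -p embed-certs-781000 --alsologtostderr -v=1
	out/minikube-darwin-amd64 status --format={{.APIServer}} -p embed-certs-781000 -n embed-certs-781000
	# exits 0 again once the components are unpaused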

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (64.08s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p custom-flannel-978000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p custom-flannel-978000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=hyperkit : (1m4.080362456s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (64.08s)
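
Unlike the built-in plugin names used elsewhere in this report, the custom-flannel group passes a manifest path to --cni; as far as the flag is concerned, either a known plugin name or a path to a CNI manifest appears to be accepted. A minimal sketch with a hypothetical profile name and path (placeholders, not taken from this run):

	out/minikube-darwin-amd64 start -p <profile> --memory=3072 --cni=/path/to/custom-cni.yaml --driver=hyperkit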

                                                
                                    
TestNetworkPlugins/group/false/Start (101.35s)

=== RUN   TestNetworkPlugins/group/false/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p false-978000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit 
E0722 04:56:34.453703    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/no-preload-189000/client.crt: no such file or directory
E0722 04:56:39.144527    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/skaffold-264000/client.crt: no such file or directory
E0722 04:57:22.879199    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/default-k8s-diff-port-961000/client.crt: no such file or directory
E0722 04:57:22.884758    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/default-k8s-diff-port-961000/client.crt: no such file or directory
E0722 04:57:22.895681    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/default-k8s-diff-port-961000/client.crt: no such file or directory
E0722 04:57:22.915831    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/default-k8s-diff-port-961000/client.crt: no such file or directory
E0722 04:57:22.955950    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/default-k8s-diff-port-961000/client.crt: no such file or directory
E0722 04:57:23.037474    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/default-k8s-diff-port-961000/client.crt: no such file or directory
E0722 04:57:23.197952    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/default-k8s-diff-port-961000/client.crt: no such file or directory
E0722 04:57:23.519273    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/default-k8s-diff-port-961000/client.crt: no such file or directory
E0722 04:57:24.160341    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/default-k8s-diff-port-961000/client.crt: no such file or directory
E0722 04:57:25.440446    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/default-k8s-diff-port-961000/client.crt: no such file or directory
E0722 04:57:28.001757    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/default-k8s-diff-port-961000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p false-978000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=false --driver=hyperkit : (1m41.352494873s)
--- PASS: TestNetworkPlugins/group/false/Start (101.35s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p custom-flannel-978000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (11.15s)

=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-978000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-9hw67" [31d3e9a7-f627-4cd9-bb87-ddecb1b341f1] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0722 04:57:33.121942    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/default-k8s-diff-port-961000/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-9hw67" [31d3e9a7-f627-4cd9-bb87-ddecb1b341f1] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 11.004818955s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (11.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-978000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.14s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-978000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-978000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (71.16s)

=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kindnet-978000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit 
E0722 04:58:03.843704    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/default-k8s-diff-port-961000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kindnet-978000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=hyperkit : (1m11.157717711s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (71.16s)

                                                
                                    
TestNetworkPlugins/group/false/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/false/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p false-978000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/false/KubeletFlags (0.16s)

                                                
                                    
TestNetworkPlugins/group/false/NetCatPod (12.14s)

=== RUN   TestNetworkPlugins/group/false/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context false-978000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-q54s6" [1d742617-8628-4aca-a5bd-61b3ac487967] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-q54s6" [1d742617-8628-4aca-a5bd-61b3ac487967] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/false/NetCatPod: app=netcat healthy within 12.002998425s
--- PASS: TestNetworkPlugins/group/false/NetCatPod (12.14s)

                                                
                                    
TestNetworkPlugins/group/false/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/false/DNS
net_test.go:175: (dbg) Run:  kubectl --context false-978000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/false/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/false/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/false/Localhost
net_test.go:194: (dbg) Run:  kubectl --context false-978000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/false/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/false/HairPin (0.11s)

=== RUN   TestNetworkPlugins/group/false/HairPin
net_test.go:264: (dbg) Run:  kubectl --context false-978000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/false/HairPin (0.11s)

                                                
                                    
TestNetworkPlugins/group/flannel/Start (61.3s)

=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p flannel-978000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit 
E0722 04:59:00.325762    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/auto-978000/client.crt: no such file or directory
E0722 04:59:00.331167    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/auto-978000/client.crt: no such file or directory
E0722 04:59:00.341503    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/auto-978000/client.crt: no such file or directory
E0722 04:59:00.362237    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/auto-978000/client.crt: no such file or directory
E0722 04:59:00.402313    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/auto-978000/client.crt: no such file or directory
E0722 04:59:00.482416    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/auto-978000/client.crt: no such file or directory
E0722 04:59:00.642651    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/auto-978000/client.crt: no such file or directory
E0722 04:59:00.963102    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/auto-978000/client.crt: no such file or directory
E0722 04:59:01.604878    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/auto-978000/client.crt: no such file or directory
E0722 04:59:02.885724    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/auto-978000/client.crt: no such file or directory
E0722 04:59:05.447590    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/auto-978000/client.crt: no such file or directory
E0722 04:59:10.567715    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/auto-978000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p flannel-978000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=hyperkit : (1m1.302047635s)
--- PASS: TestNetworkPlugins/group/flannel/Start (61.30s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-g6qv4" [04a7bd69-2e87-42ed-a79c-afdcece2e927] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.003017066s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kindnet-978000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (10.15s)

=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-978000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-2hlhl" [b89229c0-b32c-4cb9-b03c-4e837dd1d927] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0722 04:59:20.808242    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/auto-978000/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-2hlhl" [b89229c0-b32c-4cb9-b03c-4e837dd1d927] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 10.004060734s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (10.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-978000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-978000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-978000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (207.65s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p enable-default-cni-978000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit 
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p enable-default-cni-978000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=hyperkit : (3m27.649968911s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (207.65s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6s)

=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-d7ztx" [b90869ff-ca33-41a3-8188-505645b78cfc] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.002338775s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.00s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p flannel-978000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (11.15s)

=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-978000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-zv2q9" [541fd0d3-084c-480d-867e-1ab4779cf084] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-zv2q9" [541fd0d3-084c-480d-867e-1ab4779cf084] Running
E0722 04:59:58.904457    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/old-k8s-version-732000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 11.003693651s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (11.15s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-978000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-978000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-978000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/bridge/Start (91.77s)

=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p bridge-978000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit 
E0722 05:00:22.249770    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/auto-978000/client.crt: no such file or directory
E0722 05:00:49.860610    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
E0722 05:00:51.118845    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/calico-978000/client.crt: no such file or directory
E0722 05:00:51.124568    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/calico-978000/client.crt: no such file or directory
E0722 05:00:51.136588    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/calico-978000/client.crt: no such file or directory
E0722 05:00:51.157874    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/calico-978000/client.crt: no such file or directory
E0722 05:00:51.198712    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/calico-978000/client.crt: no such file or directory
E0722 05:00:51.279120    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/calico-978000/client.crt: no such file or directory
E0722 05:00:51.439523    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/calico-978000/client.crt: no such file or directory
E0722 05:00:51.759808    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/calico-978000/client.crt: no such file or directory
E0722 05:00:52.401225    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/calico-978000/client.crt: no such file or directory
E0722 05:00:53.683054    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/calico-978000/client.crt: no such file or directory
E0722 05:00:56.243215    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/calico-978000/client.crt: no such file or directory
E0722 05:01:01.363789    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/calico-978000/client.crt: no such file or directory
E0722 05:01:06.757669    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/no-preload-189000/client.crt: no such file or directory
E0722 05:01:11.605782    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/calico-978000/client.crt: no such file or directory
E0722 05:01:22.572546    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/functional-963000/client.crt: no such file or directory
E0722 05:01:32.087609    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/calico-978000/client.crt: no such file or directory
E0722 05:01:39.140464    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/skaffold-264000/client.crt: no such file or directory
E0722 05:01:44.170956    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/auto-978000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p bridge-978000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=hyperkit : (1m31.765777719s)
--- PASS: TestNetworkPlugins/group/bridge/Start (91.77s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p bridge-978000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.16s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (10.13s)

=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-978000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-hvljf" [3a6aed2a-af1c-4a65-abd0-7a31c6783f51] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-6bc787d567-hvljf" [3a6aed2a-af1c-4a65-abd0-7a31c6783f51] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 10.002386278s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (10.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.12s)

=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-978000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.12s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.11s)

=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-978000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.11s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-978000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Start (93.1s)

=== RUN   TestNetworkPlugins/group/kubenet/Start
net_test.go:112: (dbg) Run:  out/minikube-darwin-amd64 start -p kubenet-978000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit 
E0722 05:02:22.876425    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/default-k8s-diff-port-961000/client.crt: no such file or directory
E0722 05:02:31.769193    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/custom-flannel-978000/client.crt: no such file or directory
E0722 05:02:31.775391    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/custom-flannel-978000/client.crt: no such file or directory
E0722 05:02:31.785904    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/custom-flannel-978000/client.crt: no such file or directory
E0722 05:02:31.808012    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/custom-flannel-978000/client.crt: no such file or directory
E0722 05:02:31.850121    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/custom-flannel-978000/client.crt: no such file or directory
E0722 05:02:31.930230    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/custom-flannel-978000/client.crt: no such file or directory
E0722 05:02:32.090436    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/custom-flannel-978000/client.crt: no such file or directory
E0722 05:02:32.410571    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/custom-flannel-978000/client.crt: no such file or directory
E0722 05:02:33.051483    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/custom-flannel-978000/client.crt: no such file or directory
E0722 05:02:34.332170    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/custom-flannel-978000/client.crt: no such file or directory
E0722 05:02:36.894255    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/custom-flannel-978000/client.crt: no such file or directory
E0722 05:02:42.015835    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/custom-flannel-978000/client.crt: no such file or directory
E0722 05:02:46.788520    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/addons-616000/client.crt: no such file or directory
E0722 05:02:50.566758    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/default-k8s-diff-port-961000/client.crt: no such file or directory
E0722 05:02:52.256209    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/custom-flannel-978000/client.crt: no such file or directory
E0722 05:03:12.736303    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/custom-flannel-978000/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-darwin-amd64 start -p kubenet-978000 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --network-plugin=kubenet --driver=hyperkit : (1m33.10142665s)
--- PASS: TestNetworkPlugins/group/kubenet/Start (93.10s)
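
Note that the kubenet group selects networking with --network-plugin=kubenet (the kubelet's legacy built-in mode) rather than a --cni value. Condensed from the bridge and kubenet runs above, the two invocations differ only in that flag (a sketch; the logging and wait flags are dropped):

	out/minikube-darwin-amd64 start -p bridge-978000 --memory=3072 --cni=bridge --driver=hyperkit
	out/minikube-darwin-amd64 start -p kubenet-978000 --memory=3072 --network-plugin=kubenet --driver=hyperkit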

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.17s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p enable-default-cni-978000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.17s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.16s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-978000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-zb77s" [c3b52324-b75c-4aea-ae16-152c2d2d05eb] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0722 05:03:16.008490    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/false-978000/client.crt: no such file or directory
E0722 05:03:16.014059    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/false-978000/client.crt: no such file or directory
E0722 05:03:16.024387    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/false-978000/client.crt: no such file or directory
E0722 05:03:16.044523    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/false-978000/client.crt: no such file or directory
E0722 05:03:16.084737    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/false-978000/client.crt: no such file or directory
E0722 05:03:16.164975    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/false-978000/client.crt: no such file or directory
E0722 05:03:16.326616    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/false-978000/client.crt: no such file or directory
E0722 05:03:16.646817    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/false-978000/client.crt: no such file or directory
E0722 05:03:17.287041    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/false-978000/client.crt: no such file or directory
E0722 05:03:18.567238    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/false-978000/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-zb77s" [c3b52324-b75c-4aea-ae16-152c2d2d05eb] Running
E0722 05:03:21.127643    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/false-978000/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 11.004668099s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (11.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-978000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-978000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-978000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.10s)

                                                
                                    
TestNetworkPlugins/group/kubenet/KubeletFlags (0.16s)

=== RUN   TestNetworkPlugins/group/kubenet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-darwin-amd64 ssh -p kubenet-978000 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kubenet/KubeletFlags (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet/NetCatPod (10.14s)

=== RUN   TestNetworkPlugins/group/kubenet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kubenet-978000 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-6bc787d567-2bwvz" [13e66808-80ba-4218-a36d-24d5aad62939] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0722 05:03:56.998669    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/false-978000/client.crt: no such file or directory
E0722 05:04:00.354557    1637 cert_rotation.go:168] key failed with : open /Users/jenkins/minikube-integration/19313-1111/.minikube/profiles/auto-978000/client.crt: no such file or directory
helpers_test.go:344: "netcat-6bc787d567-2bwvz" [13e66808-80ba-4218-a36d-24d5aad62939] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kubenet/NetCatPod: app=netcat healthy within 10.00460789s
--- PASS: TestNetworkPlugins/group/kubenet/NetCatPod (10.14s)

                                                
                                    
TestNetworkPlugins/group/kubenet/DNS (0.13s)

=== RUN   TestNetworkPlugins/group/kubenet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kubenet-978000 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kubenet/DNS (0.13s)

                                                
                                    
TestNetworkPlugins/group/kubenet/Localhost (0.1s)

=== RUN   TestNetworkPlugins/group/kubenet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kubenet-978000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kubenet/Localhost (0.10s)

                                                
                                    
TestNetworkPlugins/group/kubenet/HairPin (0.1s)

=== RUN   TestNetworkPlugins/group/kubenet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kubenet-978000 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kubenet/HairPin (0.10s)

                                                
                                    

Test skip (22/343)

TestDownloadOnly/v1.20.0/cached-images (0s)

=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/cached-images (0s)

=== RUN   TestDownloadOnly/v1.30.3/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.30.3/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.30.3/binaries (0s)

=== RUN   TestDownloadOnly/v1.30.3/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.30.3/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/cached-images (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.31.0-beta.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.31.0-beta.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.31.0-beta.0/binaries (0.00s)

                                                
                                    
x
+
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
x
+
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:500: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
x
+
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with docker false darwin amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
x
+
TestKVMDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
driver_install_or_update_test.go:41: Skip if not linux.
--- SKIP: TestKVMDriverInstallOrUpdate (0.00s)

                                                
                                    
x
+
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing docker
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
x
+
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
x
+
TestImageBuild/serial/validateImageBuildWithBuildEnv (0s)

                                                
                                                
=== RUN   TestImageBuild/serial/validateImageBuildWithBuildEnv
image_test.go:114: skipping due to https://github.com/kubernetes/minikube/issues/12431
--- SKIP: TestImageBuild/serial/validateImageBuildWithBuildEnv (0.00s)

                                                
                                    
x
+
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
x
+
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
x
+
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
x
+
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
x
+
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
x
+
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
x
+
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
x
+
TestStartStop/group/disable-driver-mounts (0.29s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-179000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p disable-driver-mounts-179000
--- SKIP: TestStartStop/group/disable-driver-mounts (0.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (5.46s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-978000 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-978000

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-978000

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-978000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-978000

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-978000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-978000

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-978000

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-978000

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-978000

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-978000

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-978000

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-978000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-978000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-978000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-978000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-978000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-978000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-978000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-978000" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-978000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-978000

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-978000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-978000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-978000

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-978000

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-978000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-978000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-978000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-978000" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-978000" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-978000

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-978000" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-978000"

                                                
                                                
----------------------- debugLogs end: cilium-978000 [took: 5.241053549s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-978000" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-darwin-amd64 delete -p cilium-978000
--- SKIP: TestNetworkPlugins/group/cilium (5.46s)